
HDFS File Commands

Jun 07, 2016 04:41 PM

HDFS file commands are modeled on the Linux file commands, so anyone familiar with the Linux shell will find them easy to pick up. Note that Hadoop DFS has no notion of a working directory (there is no pwd), so every path must be given in full. (This article is based on Hadoop 2.5, CDH 5.2.1.)
List the available commands, their usage and help text, and point the client at a namenode other than the one in the configuration file:

hdfs dfs -usage
hdfs dfs -usage ls
hdfs dfs -help
-fs <local|namenode:port>      specify a namenode
hdfs dfs -fs hdfs://test1:9000 -ls /

--------------------------------------------------------------------------------
-df [-h] [path …] :
Shows the capacity, free and used space of the filesystem. If the filesystem has
multiple partitions, and no path to a particular partition is specified, then
the status of the root partitions will be shown.

$ hdfs dfs -df
Filesystem                 Size   Used     Available  Use%
hdfs://test1:9000  413544071168  98304  345612906496    0%
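
With -h the same figures are printed in human-readable units; the sizes below are just the byte counts above converted (roughly):

hdfs dfs -df -h
    Filesystem           Size  Used  Available  Use%
    hdfs://test1:9000  385.1 G  96 K    321.9 G    0%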

--------------------------------------------------------------------------------
-mkdir [-p] path … :
Create a directory in specified location.

-p Do not fail if the directory already exists

-rmdir dir … :
Removes the directory entry specified by each directory argument, provided it is
empty.

hdfs dfs -mkdir /tmp
hdfs dfs -mkdir /tmp/txt
hdfs dfs -rmdir /tmp/txt
hdfs dfs -mkdir -p /tmp/txt/hello
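
Because -rmdir only removes empty directories, deleting /tmp/txt now fails (it contains hello); the error below is roughly what you should see:

hdfs dfs -rmdir /tmp/txt
    rmdir: `/tmp/txt': Directory is not empty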

--------------------------------------------------------------------------------
-copyFromLocal [-f] [-p] localsrc … dst :
Identical to the -put command.

-copyToLocal [-p] [-ignoreCrc] [-crc] src … localdst :
Identical to the -get command.

-moveFromLocal localsrc …
Same as -put, except that the source is deleted after it’s copied.

-put [-f] [-p] localsrc …
Copy files from the local file system into fs. Copying fails if the file already
exists, unless the -f flag is given. Passing -p preserves access and
modification times, ownership and the mode. Passing -f overwrites the
destination if it already exists.

-get [-p] [-ignoreCrc] [-crc] src … localdst :
Copy files that match the file pattern src to the local name. src is kept.
When copying multiple files, the destination must be a directory. Passing -p
preserves access and modification times, ownership and the mode.

-getmerge [-nl] src localdst :
Get all the files in the directories that match the source file pattern and
merge and sort them to only one file on local fs. src is kept.

-nl Add a newline character at the end of each file.

-cat [-ignoreCrc] src … :
Fetch all files that match the file pattern src and display their content on
stdout.

# create some local test files first, then upload them
echo "Hello, Hadoop" > hadoop.txt
echo "Hello, HDFS" > hdfs.txt
dd if=/dev/zero of=/tmp/test.zero bs=1M count=1024
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 0.93978 s, 1.1 GB/s
hdfs dfs -put *.txt /tmp
hdfs dfs -moveFromLocal /tmp/test.zero /tmp

# wildcards: ? * {} []
hdfs dfs -cat /tmp/*.txt
    Hello, Hadoop
    Hello, HDFS
hdfs dfs -cat /tmp/h?fs.txt
    Hello, HDFS
hdfs dfs -cat /tmp/h{a,d}*.txt
    Hello, Hadoop
    Hello, HDFS
hdfs dfs -cat /tmp/h[a-d]*.txt
    Hello, Hadoop
    Hello, HDFS
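
Going the other way, -get (alias -copyToLocal) and -getmerge pull files back to the local filesystem. A minimal sketch using the files uploaded above; the local filenames are illustrative:

hdfs dfs -get /tmp/hadoop.txt hadoop.copy.txt
hdfs dfs -copyToLocal /tmp/hdfs.txt .
# merge all h*.txt under /tmp into one local file, adding a newline after each input (-nl)
hdfs dfs -getmerge -nl /tmp/h*.txt merged.txt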

--------------------------------------------------------------------------------
-ls [-d] [-h] [-R] [path …] :
List the contents that match the specified file pattern. If path is not
specified, the contents of /user/currentUser will be listed. Directory entries
are of the form:
permissions - userId groupId sizeOfDirectory(in bytes)
modificationDate(yyyy-MM-dd HH:mm) directoryName

and file entries are of the form:
permissions numberOfReplicas userId groupId sizeOfFile(in bytes)
modificationDate(yyyy-MM-dd HH:mm) fileName

-d Directories are listed as plain files.
-h Formats the sizes of files in a human-readable fashion rather than a number
of bytes.
-R Recursively list the contents of directories.

hdfs dfs -ls /tmp
hdfs dfs -ls -d /tmp
hdfs dfs -ls -h /tmp
  Found 4 items
  -rw-r--r--   3 hdfs supergroup         14 2014-12-18 10:00 /tmp/hadoop.txt
  -rw-r--r--   3 hdfs supergroup         12 2014-12-18 10:00 /tmp/hdfs.txt
  -rw-r--r--   3 hdfs supergroup        1 G 2014-12-18 10:19 /tmp/test.zero
  drwxr-xr-x   - hdfs supergroup          0 2014-12-18 10:07 /tmp/txt
hdfs dfs -ls -R -h /tmp
  -rw-r--r--   3 hdfs supergroup         14 2014-12-18 10:00 /tmp/hadoop.txt
  -rw-r--r--   3 hdfs supergroup         12 2014-12-18 10:00 /tmp/hdfs.txt
  -rw-r--r--   3 hdfs supergroup        1 G 2014-12-18 10:19 /tmp/test.zero
  drwxr-xr-x   - hdfs supergroup          0 2014-12-18 10:07 /tmp/txt
  drwxr-xr-x   - hdfs supergroup          0 2014-12-18 10:07 /tmp/txt/hello

--------------------------------------------------------------------------------
-checksum src … :
Dump checksum information for files that match the file pattern src to stdout.
Note that this requires a round-trip to a datanode storing each block of the
file, and thus is not efficient to run on a large number of files. The checksum
of a file depends on its content, block size and the checksum algorithm and
parameters used for creating the file.

hdfs dfs -checksum /tmp/test.zero
  /tmp/test.zero	MD5-of-262144MD5-of-512CRC32C	000002000000000000040000f960570129a4ef3a7e179073adceae97

--------------------------------------------------------------------------------
-appendToFile localsrc … dst :
Appends the contents of all the given local files to the given dst file. The dst
file will be created if it does not exist. If localSrc is -, then the input is
read from stdin.

hdfs dfs -appendToFile *.txt hello.txt
hdfs dfs -cat hello.txt
  Hello, Hadoop
  Hello, HDFS

--------------------------------------------------------------------------------
-tail [-f] file :
Show the last 1KB of the file.

hdfs dfs -tail -f hello.txt
# blocks waiting for new data; press Ctrl+C to stop
# in another terminal, append to the file from stdin:
hdfs dfs -appendToFile - hello.txt
# whatever you type appears in the tail -f terminal

--------------------------------------------------------------------------------
-cp [-f] [-p | -p[topax]] src … dst :
Copy files that match the file pattern src to a destination. When copying
multiple files, the destination must be a directory. Passing -p preserves status
[topax] (timestamps, ownership, permission, ACLs, XAttr). If -p is specified
with no arg, then preserves timestamps, ownership, permission. If -pa is
specified, then preserves permission also, because an ACL is a super-set of
permission. Passing -f overwrites the destination if it already exists. raw
namespace extended attributes are preserved if (1) they are supported (HDFS
only) and, (2) all of the source and target pathnames are in the /.reserved/raw
hierarchy. raw namespace xattr preservation is determined solely by the presence
(or absence) of the /.reserved/raw prefix and not by the -p option.

-mv src … dst :
Move files that match the specified file pattern src to a destination dst.
When moving multiple files, the destination must be a directory.

-rm [-f] [-r|-R] [-skipTrash] src … :
Delete all files that match the specified file pattern. Equivalent to the Unix
command "rm src".

-skipTrash option bypasses trash, if enabled, and immediately deletes src
-f If the file does not exist, do not display a diagnostic message or
modify the exit status to reflect an error.
-[rR] Recursively deletes directories

-stat [format] path … :
Print statistics about the file/directory at path in the specified format.
Format accepts filesize in blocks (%b), group name of owner(%g), filename (%n),
block size (%o), replication (%r), user name of owner(%u), modification date
(%y, %Y)

hdfs dfs -stat /tmp/hadoop.txt
    2014-12-18 02:00:08
hdfs dfs -cp -p -f hello.txt /tmp/hello.txt.bak
hdfs dfs -stat /tmp/hello.txt.bak
hdfs dfs -rm /tmp/not_exists
    rm: `/tmp/not_exists': No such file or directory
echo $?
    1
hdfs dfs -rm -f /tmp/123321123123123
echo $?
    0
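
A quick sketch of -mv and of -stat with a format string (the specifiers are listed above; the paths reuse earlier examples):

# move the backup into the txt directory
hdfs dfs -mv /tmp/hello.txt.bak /tmp/txt/
# print filename, replication, owner:group and modification date
hdfs dfs -stat "%n %r %u:%g %y" /tmp/hadoop.txt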

--------------------------------------------------------------------------------
-count [-q] path … :
Count the number of directories, files and bytes under the paths
that match the specified file pattern. The output columns are:
DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or
QUOTA REMAINING_QUOTA SPACE_QUOTA REMAINING_SPACE_QUOTA
DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME

-du [-s] [-h] path … :
Show the amount of space, in bytes, used by the files that match the specified
file pattern. The following flags are optional:

-s Rather than showing the size of each individual file that matches the
pattern, shows the total (summary) size.
-h Formats the sizes of files in a human-readable fashion rather than a number
of bytes.

Note that, even without the -s option, this only shows size summaries one level
deep into a directory.

The output is in the form
size name(full path)

hdfs dfs -count /tmp
           3            3         1073741850 /tmp
hdfs dfs -du /tmp
    14          /tmp/hadoop.txt
    12          /tmp/hdfs.txt
    1073741824  /tmp/test.zero
    0           /tmp/txt
hdfs dfs -du -s /tmp
    1073741850  /tmp
hdfs dfs -du -s -h /tmp
    1.0 G  /tmp
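
With -q, -count prepends the four quota columns described above; when no quota has been set the quota fields read none/inf (columns compressed here, the real output is wide and right-aligned):

hdfs dfs -count -q /tmp
    none  inf  none  inf  3  3  1073741850  /tmp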

--------------------------------------------------------------------------------
-chgrp [-R] GROUP PATH… :
This is equivalent to -chown … :GROUP …

-chmod [-R] MODE[,MODE]… | OCTALMODE PATH… :
Changes permissions of a file. This works similar to the shell’s chmod command
with a few exceptions.

-R modifies the files recursively. This is the only option currently
supported.
MODE Mode is the same as mode used for the shell’s command. The only
letters recognized are ‘rwxXt’, e.g. +t,a+r,g-w,+rwx,o=r.
OCTALMODE Mode specified in 3 or 4 digits. If 4 digits, the first may be 1 or
0 to turn the sticky bit on or off, respectively. Unlike the
shell command, it is not possible to specify only part of the
mode, e.g. 754 is same as u=rwx,g=rx,o=r.

If none of ‘augo’ is specified, ‘a’ is assumed and unlike the shell command, no
umask is applied.

-chown [-R] [OWNER][:[GROUP]] PATH… :
Changes owner and group of a file. This is similar to the shell’s chown command
with a few exceptions.

-R modifies the files recursively. This is the only option currently
supported.

If only the owner or group is specified, then only the owner or group is
modified. The owner and group names may only consist of digits, alphabet, and
any of [-_./@a-zA-Z0-9]. The names are case sensitive.

WARNING: Avoid using ‘.’ to separate user name and group though Linux allows it.
If user names have dots in them and you are using local file system, you might
see surprising results since the shell command ‘chown’ is used for local files.

-touchz path … :
Creates a file of zero length at path with current time as the timestamp of
that path. An error is returned if the file exists with non-zero length

hdfs dfs -mkdir -p /user/spark/tmp
hdfs dfs -chown -R spark:hadoop /user/spark
hdfs dfs -chmod -R 775 /user/spark/tmp
hdfs dfs -ls -d /user/spark/tmp
    drwxrwxr-x   - spark hadoop          0 2014-12-18 14:51 /user/spark/tmp
hdfs dfs -chmod +t /user/spark/tmp
# as user spark
hdfs dfs -touchz /user/spark/tmp/own_by_spark
# as root, create a "hadoop" user and switch to it
useradd -g hadoop hadoop
su - hadoop
id
    uid=502(hadoop) gid=492(hadoop) groups=492(hadoop)
hdfs dfs -rm /user/spark/tmp/own_by_spark
    rm: Permission denied by sticky bit setting: user=hadoop, inode=own_by_spark
# a superuser (member of dfs.permissions.superusergroup, default "hdfs") can ignore the sticky bit

--------------------------------------------------------------------------------
-test -[defsz] path :
Answer various questions about path, with result via exit status.
-d return 0 if path is a directory.
-e return 0 if path exists.
-f return 0 if path is a file.
-s return 0 if file path is greater than zero bytes in size.
-z return 0 if file path is zero bytes in size, else return 1.

hdfs dfs -test -d /tmp
echo $?
    0
hdfs dfs -test -f /tmp/txt
echo $?
    1
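
The remaining flags behave the same way; a short sketch using the zero-length file created with -touchz above:

hdfs dfs -test -e /tmp/not_exists
echo $?
    1
hdfs dfs -test -z /user/spark/tmp/own_by_spark
echo $?
    0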

--------------------------------------------------------------------------------
-setrep [-R] [-w] rep path … :
Set the replication level of a file. If path is a directory then the command
recursively changes the replication factor of all files under the directory tree
rooted at path.
-w It requests that the command waits for the replication to complete. This
can potentially take a very long time.

hdfs fsck /tmp/test.zero -blocks -locations
    Average block replication:	3.0
hdfs dfs -setrep -w 4  /tmp/test.zero
    Replication 4 set: /tmp/test.zero
    Waiting for /tmp/test.zero .... done
hdfs fsck /tmp/test.zero -blocks
    Average block replication:	4.0