
Python: Flatten Nested JSON into a DataFrame in Seconds!

Dec 29, 2020, 09:34 AM
Tags: dataframe, json, pandas, python, data processing


Calling an API or querying a document database often returns a nested JSON object. When we use Python to load that data into pandas and try to turn the keys of the nested structure into columns, we usually end up with something like this:

df = pd.DataFrame.from_records(results["issues"], columns=["key", "fields"])
Explanation: results here is a large dictionary, issues is a key in results, and the value of issues is a list of nested JSON object dictionaries. You will see the nested JSON structure later.
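To make the problem concrete, here is a minimal, self-contained sketch. The data below is made up to mimic the shape of the API response (it is not the real API output), and it shows why from_records alone is not enough:

```python
import pandas as pd

# Hypothetical data mimicking the API response shape
results = {
    "issues": [
        {"key": "CAE-160", "fields": {"summary": "Fix MFT parsing",
                                      "status": {"name": "Resolved"}}},
        {"key": "CAE-161", "fields": {"summary": "Add export option",
                                      "status": {"name": "Open"}}},
    ]
}

df = pd.DataFrame.from_records(results["issues"], columns=["key", "fields"])
print(df)
# The "fields" column still holds raw dicts, not flat columns:
print(type(df.loc[0, "fields"]))  # <class 'dict'>
```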

The problem is that the API returns a nested JSON structure, and the keys we care about are actually at different levels in the object.

The nested JSON structure is shown below; what we want is a flat table with one column per field.

The following takes the data returned by an API as an example. The API usually includes metadata about the fields. Suppose these are the fields we want:

  • key: a JSON key, at the first level.
  • summary: inside the second-level "fields" object.
  • status name: at the third nesting level.
  • statusCategory name: at the fourth nesting level.

As listed above, the fields we want to extract sit at four different nesting levels inside each JSON object in the issues list.

{
  "expand": "schema,names",
  "issues": [
    {
      "fields": {
        "issuetype": {
          "avatarId": 10300,
          "description": "",
          "id": "10005",
          "name": "New Feature",
          "subtask": false
        },
        "status": {
          "description": "A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.",
          "id": "5",
          "name": "Resolved",
          "statusCategory": {
            "colorName": "green",
            "id": 3,
            "key": "done",
            "name": "Done"
          }
        },
        "summary": "Recovered data collection Defraglar $MFT problem"
      },
      "id": "11861",
      "key": "CAE-160"
    },
    {
      "fields": { ... }
    }
    ... more issues
  ],
  "maxResults": 5,
  "startAt": 0,
  "total": 160
}

A not-so-good solution

One option is to code it directly: write a function that digs out a specific field. The problem is that for every nested field you must call this function, and then call .apply to add the result as a new column of the DataFrame.

In order to get the several fields we want, first we extract the objects in the fields key to the column:

df = (
    df["fields"]
    .apply(pd.Series)
    .merge(df, left_index=True, right_index=True)
)

As can be seen from the result, only summary has been pulled out; issuetype, status, and the rest are still buried in nested objects.

The following is a method to extract the name in issuetype.

# Extract the issue type's name into a new column called "issue_type_name"
df_issue_type = (
    df["issuetype"]
    .apply(pd.Series)
    .rename(columns={"name": "issue_type_name"})["issue_type_name"]
)
df = df.assign(issue_type_name=df_issue_type)

As shown above, if there are many nesting levels you effectively have to implement the recursion yourself, because every level of nesting needs a call like the one above to parse it and add it as a new column.
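For illustration, the manual approach can be generalized into a small helper that walks a dotted path into each nested dict. The helper name get_nested and the sample data below are hypothetical, not from the original article:

```python
import pandas as pd
from functools import reduce

def get_nested(record, path, default=None):
    """Follow a dotted path like 'status.statusCategory.name' into a dict."""
    return reduce(
        lambda d, k: d.get(k, {}) if isinstance(d, dict) else {},
        path.split("."),
        record,
    ) or default

# Made-up records mimicking the "fields" objects from the API
issues = [
    {"fields": {"status": {"statusCategory": {"name": "Done"}}}},
    {"fields": {"status": {"statusCategory": {"name": "In Progress"}}}},
]
df = pd.DataFrame({"fields": [i["fields"] for i in issues]})

# One .apply call per nested field we want to extract
df["status_category_name"] = df["fields"].apply(
    lambda f: get_nested(f, "status.statusCategory.name")
)
print(df["status_category_name"].tolist())  # ['Done', 'In Progress']
```

This works, but you still need one such call per field, which is exactly the tedium the built-in solution below avoids.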

For readers with a weak programming background, this is quite a hassle to learn. Data analysts in particular, when they are in a hurry to use the data, want structured data ready for analysis as quickly as possible.

Now Brother Dong shares a pandas built-in solution.

Built-in solution

pandas has an awesome built-in function called .json_normalize. The pandas documentation describes it as normalizing semi-structured JSON data into a flat table.

All the code of the previous solution can be replaced by this built-in function in just 3 lines. The steps are very simple; you only need to understand the following usage:

1. Determine the fields we want, using the "." symbol to join nested keys.

2. Pass the nested list to process (here results["issues"]) as a parameter to .json_normalize.

3. Filter the result with the FIELDS list we defined.

FIELDS = ["key", "fields.summary", "fields.issuetype.name", "fields.status.name", "fields.status.statusCategory.name"]
df = pd.json_normalize(results["issues"])
df[FIELDS]

Yes, it’s that simple.
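As a quick self-contained check (again with made-up data mimicking the API shape, not the real response), the single call really does flatten all four levels:

```python
import pandas as pd

# Hypothetical data shaped like the API response
results = {
    "issues": [
        {
            "key": "CAE-160",
            "fields": {
                "summary": "Recovered data collection",
                "issuetype": {"name": "New Feature"},
                "status": {"name": "Resolved",
                           "statusCategory": {"name": "Done"}},
            },
        }
    ]
}

FIELDS = ["key", "fields.summary", "fields.issuetype.name",
          "fields.status.name", "fields.status.statusCategory.name"]

# Nested keys become dotted column names in one call
df = pd.json_normalize(results["issues"])[FIELDS]
print(df.columns.tolist())
print(df.iloc[0]["fields.status.statusCategory.name"])  # Done
```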

Other operations

Record path

In addition to passing the list results["issues"] directly as above, we can also use the record_path parameter to specify the path to the list within the JSON object.

# Use a path instead of results["issues"] directly
pd.json_normalize(results, record_path="issues")[FIELDS]
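A small sketch with made-up data; as a bonus, json_normalize also accepts a meta parameter, which pulls top-level keys (such as total here) into every row of the result:

```python
import pandas as pd

# Hypothetical data shaped like the API response
results = {
    "total": 2,
    "issues": [
        {"key": "CAE-160", "fields": {"summary": "First"}},
        {"key": "CAE-161", "fields": {"summary": "Second"}},
    ],
}

# record_path locates the list; meta copies top-level keys into each row
df = pd.json_normalize(results, record_path="issues", meta=["total"])
print(df[["key", "fields.summary", "total"]])
```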

Custom delimiter

You can also use the sep parameter to customize the delimiter that joins nested levels, for example replacing the default "." with "-" below.

# Replace the default "." with "-"
FIELDS = ["key", "fields-summary", "fields-issuetype-name", "fields-status-name", "fields-status-statusCategory-name"]
pd.json_normalize(results["issues"], sep="-")[FIELDS]

Control recursion

If you don't want to recurse into every sub-object, you can use the max_level parameter to control the depth. In this case, since the statusCategory.name field is at the 4th level of the JSON object, it will not be included in the resulting DataFrame.

# Only expand down to the second nesting level
pd.json_normalize(results, record_path="issues", max_level=2)
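A hedged sketch (made-up data again) of what max_level=2 leaves behind: levels deeper than two stay as raw dicts in a single column instead of being expanded:

```python
import pandas as pd

# Hypothetical data shaped like the API response
results = {
    "issues": [
        {"key": "CAE-160",
         "fields": {"status": {"name": "Resolved",
                               "statusCategory": {"name": "Done"}}}}
    ]
}

df = pd.json_normalize(results, record_path="issues", max_level=2)
print(df.columns.tolist())
# statusCategory was NOT expanded; it remains a dict in one column:
print(df.loc[0, "fields.status.statusCategory"])
```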
Below is the pandas official documentation for .json_normalize; if anything is still unclear, it is worth studying on your own. That's all from Brother Dong this time.

pandas official documentation: https://pandas.pydata.org/pan...

