Refactoring Based on Design Principles: An Example of a Data Collection Crawler System

Introduction

Improving code quality is a constant concern in software development. In this article, we use a data collection crawler system as an example and walk through how to apply design principles and best practices by refactoring it step by step.

Code Before Improvement

First, we start with a very simple web scraper in which all functionality is packed into a single class.

project_root/
├── web_scraper.py
├── main.py
└── requirements.txt

web_scraper.py

import requests
import json
import sqlite3

class WebScraper:
    def __init__(self, url):
        self.url = url

    def fetch_data(self):
        response = requests.get(self.url)
        data = response.text
        parsed_data = self.parse_data(data)
        enriched_data = self.enrich_data(parsed_data)
        self.save_data(enriched_data)
        return enriched_data

    def parse_data(self, data):
        return json.loads(data)

    def enrich_data(self, data):
        # Apply business logic here
        # Example: extract only data containing specific keywords
        return {k: v for k, v in data.items() if 'important' in v.lower()}

    def save_data(self, data):
        conn = sqlite3.connect('test.db')
        cursor = conn.cursor()
        cursor.execute('INSERT INTO data (json_data) VALUES (?)', (json.dumps(data),))
        conn.commit()
        conn.close()

main.py

from web_scraper import WebScraper

def main():
    scraper = WebScraper('https://example.com/api/data')
    data = scraper.fetch_data()
    print(data)

if __name__ == "__main__":
    main()

Areas for Improvement

  1. Violation of the single responsibility principle: a single class is responsible for all data fetching, parsing, enrichment, and storage
  2. Unclear business logic: the business logic is embedded in the enrich_data method and mixed with other processing
  3. Lack of reusability: the functions are tightly coupled, making it difficult to reuse them individually
  4. Difficult to test: it is hard to test each piece of functionality in isolation
  5. Rigid configuration: the database path and other settings are hard-coded in the code

Refactoring Stages

1. Separation of Responsibilities: Separating Data Fetching, Parsing, and Storage

  • Main change: separate the responsibilities of data fetching, parsing, and storage into different classes
  • Goal: apply the single responsibility principle and introduce environment variables

Directory structure

project_root/
├── data_fetcher.py
├── data_parser.py
├── data_saver.py
├── data_enricher.py
├── web_scraper.py
├── main.py
└── requirements.txt

data_enricher.py

class DataEnricher:
    def enrich(self, data):
        return {k: v for k, v in data.items() if 'important' in v.lower()}
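
For reference, the remaining classes from the directory listing (data_fetcher.py, data_parser.py, and data_saver.py) are not shown in the article. A minimal sketch of what they could look like follows, derived from the calls made in web_scraper.py; the hard-coded 'test.db' path simply mirrors the original code.

data_fetcher.py (sketch)

import requests

class DataFetcher:
    def fetch(self, url):
        # Retrieve the raw response body from the given URL
        response = requests.get(url)
        return response.text

data_parser.py (sketch)

import json

class DataParser:
    def parse(self, raw_data):
        # Convert the raw JSON string into a dictionary
        return json.loads(raw_data)

data_saver.py (sketch)

import json
import sqlite3

class DataSaver:
    def save(self, data):
        # Persist the data as a JSON string, using the same table as the original code
        conn = sqlite3.connect('test.db')
        cursor = conn.cursor()
        cursor.execute('INSERT INTO data (json_data) VALUES (?)', (json.dumps(data),))
        conn.commit()
        conn.close()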

web_scraper.py

from data_fetcher import DataFetcher
from data_parser import DataParser
from data_enricher import DataEnricher
from data_saver import DataSaver

class WebScraper:
    def __init__(self, url):
        self.url = url
        self.fetcher = DataFetcher()
        self.parser = DataParser()
        self.enricher = DataEnricher()
        self.saver = DataSaver()

    def fetch_data(self):
        raw_data = self.fetcher.fetch(self.url)
        parsed_data = self.parser.parse(raw_data)
        enriched_data = self.enricher.enrich(parsed_data)
        self.saver.save(enriched_data)
        return enriched_data

This change clarifies the responsibility of each class and improves reusability and testability. However, the business logic is still embedded in the DataEnricher class.

2. Introducing Interfaces and Dependency Injection

  • Main change: introduce interfaces and implement dependency injection.
  • Goal: increase flexibility and extensibility, extend the use of environment variables, and abstract the business logic

Directory structure

project_root/
├── interfaces/
│   ├── __init__.py
│   ├── data_fetcher_interface.py
│   ├── data_parser_interface.py
│   ├── data_enricher_interface.py
│   └── data_saver_interface.py
├── implementations/
│   ├── __init__.py
│   ├── http_data_fetcher.py
│   ├── json_data_parser.py
│   ├── keyword_data_enricher.py
│   └── sqlite_data_saver.py
├── web_scraper.py
├── main.py
└── requirements.txt

interfaces/data_fetcher_interface.py

from abc import ABC, abstractmethod

class DataFetcherInterface(ABC):
    @abstractmethod
    def fetch(self, url: str) -> str:
        pass

interfaces/data_parser_interface.py

from abc import ABC, abstractmethod
from typing import Dict, Any

class DataParserInterface(ABC):
    @abstractmethod
    def parse(self, raw_data: str) -> Dict[str, Any]:
        pass

interfaces/data_enricher_interface.py

from abc import ABC, abstractmethod
from typing import Dict, Any

class DataEnricherInterface(ABC):
    @abstractmethod
    def enrich(self, data: Dict[str, Any]) -> Dict[str, Any]:
        pass

interfaces/data_saver_interface.py

from abc import ABC, abstractmethod
from typing import Dict, Any

class DataSaverInterface(ABC):
    @abstractmethod
    def save(self, data: Dict[str, Any]) -> None:
        pass

implementations/keyword_data_enricher.py

import os
from interfaces.data_enricher_interface import DataEnricherInterface

class KeywordDataEnricher(DataEnricherInterface):
    def __init__(self):
        self.keyword = os.getenv('IMPORTANT_KEYWORD', 'important')

    def enrich(self, data):
        return {k: v for k, v in data.items() if self.keyword in str(v).lower()}
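
The other implementations listed in the directory (http_data_fetcher.py, json_data_parser.py, and sqlite_data_saver.py) are not shown in the article; they would follow the same pattern of implementing their respective interfaces. As one hedged example, a possible sqlite_data_saver.py is sketched below, reading the database path from an environment variable in the same spirit as KeywordDataEnricher; the DATABASE_PATH variable name is an assumption.

implementations/sqlite_data_saver.py (sketch)

import json
import os
import sqlite3
from typing import Dict, Any

from interfaces.data_saver_interface import DataSaverInterface

class SqliteDataSaver(DataSaverInterface):
    def __init__(self):
        # DATABASE_PATH is an assumed variable name; defaults to the original test.db
        self.db_path = os.getenv('DATABASE_PATH', 'test.db')

    def save(self, data: Dict[str, Any]) -> None:
        # Store the enriched data as a JSON string
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()
        cursor.execute('INSERT INTO data (json_data) VALUES (?)', (json.dumps(data),))
        conn.commit()
        conn.close()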

web_scraper.py

from interfaces.data_fetcher_interface import DataFetcherInterface
from interfaces.data_parser_interface import DataParserInterface
from interfaces.data_enricher_interface import DataEnricherInterface
from interfaces.data_saver_interface import DataSaverInterface

class WebScraper:
    def __init__(self, fetcher: DataFetcherInterface, parser: DataParserInterface, 
                 enricher: DataEnricherInterface, saver: DataSaverInterface):
        self.fetcher = fetcher
        self.parser = parser
        self.enricher = enricher
        self.saver = saver

    def fetch_data(self, url):
        raw_data = self.fetcher.fetch(url)
        parsed_data = self.parser.parse(raw_data)
        enriched_data = self.enricher.enrich(parsed_data)
        self.saver.save(enriched_data)
        return enriched_data

The main changes at this stage are:

  1. Interfaces have been introduced, making it easy to switch to different implementations
  2. Dependency injection makes the WebScraper class more flexible
  3. The fetch_data method has been changed to take the url as a parameter, making URL specification more flexible
  4. The business logic has been abstracted into DataEnricherInterface and implemented as KeywordDataEnricher
  5. By allowing the keyword to be set via an environment variable, the business logic has become more flexible

These changes have greatly improved the flexibility and extensibility of the system. However, the business logic is still embedded in DataEnricherInterface and its implementation. The next step is to separate this business logic further and define it explicitly as a domain layer.
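
The article does not show how these dependencies are wired together at this stage. One possible main.py acting as the composition root is sketched below; the concrete class names HttpDataFetcher, JsonDataParser, and SqliteDataSaver are assumptions derived from the file names in the directory listing.

main.py (sketch)

from implementations.http_data_fetcher import HttpDataFetcher
from implementations.json_data_parser import JsonDataParser
from implementations.keyword_data_enricher import KeywordDataEnricher
from implementations.sqlite_data_saver import SqliteDataSaver
from web_scraper import WebScraper

def main():
    # Concrete implementations are injected from the outside,
    # so swapping any of them does not require changing WebScraper
    scraper = WebScraper(
        fetcher=HttpDataFetcher(),
        parser=JsonDataParser(),
        enricher=KeywordDataEnricher(),
        saver=SqliteDataSaver(),
    )
    data = scraper.fetch_data('https://example.com/api/data')
    print(data)

if __name__ == "__main__":
    main()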

3. Introducing a Domain Layer and Separating Business Logic

In the previous step, the introduction of interfaces increased the flexibility of the system. However, the business logic (in this case, determining and filtering the importance of data) is still treated as part of the data layer. Based on the ideas of domain-driven design, treating this business logic as the central concept of the system and implementing it as an independent domain layer brings the following benefits:

  1. Centralized management of business logic
  2. More expressive code through the domain model
  3. Greater flexibility in changing business rules
  4. Easier testing

Updated directory structure:

project_root/
├── domain/
│   ├── __init__.py
│   ├── scraped_data.py
│   └── data_enrichment_service.py
├── data/
│   ├── __init__.py
│   ├── interfaces/
│   │   ├── __init__.py
│   │   ├── data_fetcher_interface.py
│   │   ├── data_parser_interface.py
│   │   └── data_saver_interface.py
│   ├── implementations/
│   │   ├── __init__.py
│   │   ├── http_data_fetcher.py
│   │   ├── json_data_parser.py
│   │   └── sqlite_data_saver.py
├── application/
│   ├── __init__.py
│   └── web_scraper.py
├── main.py
└── requirements.txt

At this stage, the roles of DataEnricherInterface and KeywordDataEnricher are moved into the ScrapedData model and the DataEnrichmentService in the domain layer. The details of this change are shown below.

Before the change (Step 2)

class DataEnricherInterface(ABC):
    @abstractmethod
    def enrich(self, data: Dict[str, Any]) -> Dict[str, Any]:
        pass

class KeywordDataEnricher(DataEnricherInterface):
    def __init__(self):
        self.keyword = os.getenv('IMPORTANT_KEYWORD', 'important')

    def enrich(self, data):
        return {k: v for k, v in data.items() if self.keyword in str(v).lower()}

After the change (Step 3)

import os
from dataclasses import dataclass
from typing import Dict, Any

@dataclass
class ScrapedData:
    content: Dict[str, Any]
    source_url: str

    def is_important(self) -> bool:
        important_keyword = os.getenv('IMPORTANT_KEYWORD', 'important')
        return any(important_keyword in str(v).lower() for v in self.content.values())

class DataEnrichmentService:
    def __init__(self):
        self.important_keyword = os.getenv('IMPORTANT_KEYWORD', 'important')

    def enrich(self, data: ScrapedData) -> ScrapedData:
        if data.is_important():
            enriched_content = {k: v for k, v in data.content.items() if self.important_keyword in str(v).lower()}
            return ScrapedData(content=enriched_content, source_url=data.source_url)
        return data

This change brings the following improvements:

  1. The business logic has been moved to the domain layer, eliminating the need for DataEnricherInterface.

  2. The KeywordDataEnricher functionality has been merged into the DataEnrichmentService, centralizing the business logic in one place.

  3. The is_important method has been added to the ScrapedData model. This makes the domain model itself responsible for determining the importance of data and makes the domain concept clearer (a short usage example follows this list).

  4. DataEnrichmentService now handles ScrapedData objects directly, improving type safety.
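
As a quick illustration of point 3, the importance check can now be exercised directly on the domain model, without any fetching or storage involved. The sample values below are made up, and IMPORTANT_KEYWORD is assumed to be unset so the default keyword 'important' applies.

from domain.scraped_data import ScrapedData

data = ScrapedData(
    content={"headline": "Important update", "body": "routine details"},
    source_url="https://example.com/api/data",
)
print(data.is_important())  # True: "important" appears in a value (case-insensitive)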

The WebScraper class will also be updated to reflect this change.

from data.interfaces.data_fetcher_interface import DataFetcherInterface
from data.interfaces.data_parser_interface import DataParserInterface
from data.interfaces.data_saver_interface import DataSaverInterface
from domain.scraped_data import ScrapedData
from domain.data_enrichment_service import DataEnrichmentService

class WebScraper:
    def __init__(self, fetcher: DataFetcherInterface, parser: DataParserInterface, 
                 saver: DataSaverInterface, enrichment_service: DataEnrichmentService):
        self.fetcher = fetcher
        self.parser = parser
        self.saver = saver
        self.enrichment_service = enrichment_service

    def fetch_data(self, url: str) -> ScrapedData:
        raw_data = self.fetcher.fetch(url)
        parsed_data = self.parser.parse(raw_data)
        scraped_data = ScrapedData(content=parsed_data, source_url=url)
        enriched_data = self.enrichment_service.enrich(scraped_data)
        self.saver.save(enriched_data)
        return enriched_data

This change completely shifts the business logic from the data layer to the domain layer, giving the system a clearer structure. The removal of the DataEnricherInterface and the introduction of the DataEnrichmentService are not just interface replacements, but fundamental changes in the way business logic is handled.
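
A practical payoff of this structure is testability: WebScraper can now be exercised end to end with in-memory fakes instead of real HTTP calls or a real SQLite database. The following is a rough sketch of such a test; the fake classes and the test module are illustrative only and rely on duck typing, and IMPORTANT_KEYWORD is again assumed to be unset.

tests/test_web_scraper.py (sketch)

import json

from application.web_scraper import WebScraper
from domain.data_enrichment_service import DataEnrichmentService

class FakeFetcher:
    def fetch(self, url):
        # Return a canned payload instead of making an HTTP request
        return '{"title": "important news", "note": "other"}'

class FakeParser:
    def parse(self, raw_data):
        return json.loads(raw_data)

class FakeSaver:
    def __init__(self):
        self.saved = []

    def save(self, data):
        # Record what would have been written to the database
        self.saved.append(data)

def test_fetch_data_keeps_only_important_entries():
    saver = FakeSaver()
    scraper = WebScraper(FakeFetcher(), FakeParser(), saver, DataEnrichmentService())
    result = scraper.fetch_data('https://example.com/api/data')
    assert 'title' in result.content
    assert 'note' not in result.content
    assert saver.saved == [result]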

Summary

This article has demonstrated, through a step-by-step refactoring of a data collection crawler system, how to improve code quality and apply design principles in practice. The main areas of improvement are as follows.

  1. Separation of Responsibility: Applying the principle of single responsibility, we separated data acquisition, parsing, enrichment, and storage into separate classes.
  2. Introduction of interfaces and dependency injection: greatly increased the flexibility and extensibility of the system, making it easier to switch to different implementations.
  3. Introduction of domain model and services: clearly separated the business logic and defined the core concepts of the system.
  4. Adoption of a layered architecture: Clearly separated the domain, data, and application layers and defined the responsibilities of each layer.
  5. Maintaining interfaces: Maintained abstraction at the data layer to ensure flexibility in implementation.

These improvements have greatly enhanced the system's modularity, reusability, testability, maintainability, and scalability. In particular, by applying some concepts of domain-driven design, the business logic became clearer and the structure was more flexible to accommodate future changes in requirements. At the same time, by maintaining the interfaces, we ensured the flexibility to easily change and extend the data layer implementation.

It is important to note that this refactoring process is not a one-time event, but part of a continuous improvement process. Depending on the size and complexity of the project, it is important to adopt design principles and DDD concepts at the appropriate level and to make incremental improvements.

Finally, the approach presented in this article can be applied to a wide variety of software projects, not just data collection crawlers. We encourage you to use them as a reference as you work to improve code quality and design.
