
Introduction

In this article, we will build a web application that scrapes LinkedIn for job postings using Crawlee and Streamlit.

We will create a LinkedIn job scraper in Python with Crawlee for Python that extracts the company name, job title, time of posting, and the link to the job posting from user input received dynamically through the web application.

Note
One of our community members wrote this blog as a contribution to the Crawlee Blog. If you would like to contribute blogs like these to the Crawlee Blog, please reach out to us on our Discord channel.

By the end of this tutorial, you will have a fully functional web application that you can use to scrape job postings from LinkedIn.


Let's get started.


Prerequisites

Let's start by creating a new Crawlee for Python project with the following command:

pipx run crawlee create linkedin-scraper

Select PlaywrightCrawler in the terminal when Crawlee asks you to.

After installation, Crawlee for Python creates boilerplate code for you. Change the directory (cd) to the project folder and run this command to install the dependencies:

poetry install

We will start editing the files provided by Crawlee so that we can build the scraper.

Note
Before you go ahead with this blog, we would really appreciate it if you gave Crawlee for Python a star on GitHub!

Star it on GitHub ⭐️

Building the LinkedIn job scraper in Python with Crawlee

In this section, we will build the scraper using the Crawlee for Python package. To learn more about Crawlee, check out its documentation.

1. Inspecting the LinkedIn job search page

Open LinkedIn in your web browser and sign out of the website (if you already have an account logged in). You should see an interface like this:

[Screenshot: LinkedIn interface after signing out]

Navigate to the jobs section, search for a job and location of your choice, and copy the URL.

[Screenshot: LinkedIn job search results page with the URL to copy]

You should have something like this:

https://www.linkedin.com/jobs/search?keywords=バックエンド開発者&location=Canada&geoId=101174742&trk=public_jobs_jobs-search-bar_search-submit&position=1&pageNum=0

We will focus on the search parameters, which is the part that comes after the '?'. For us, the keywords and location parameters are the most important.

The job title supplied by the user goes into the keywords parameter, and the user-supplied location goes into the location parameter. Finally, the geoId parameter will be removed, while the other parameters stay constant.

We are going to make changes to the main.py file. Copy the code below and paste it into your main.py file.

from urllib.parse import urlencode, urljoin

from crawlee.playwright_crawler import PlaywrightCrawler

from .routes import router

async def main(title: str, location: str, data_name: str) -> None:
    base_url = "https://www.linkedin.com/jobs/search"

    # URL encode the parameters
    params = {
        "keywords": title,
        "location": location,
        "trk": "public_jobs_jobs-search-bar_search-submit",
        "position": "1",
        "pageNum": "0"
    }

    encoded_params = urlencode(params)

    # Encode parameters into a query string
    query_string = '?' + encoded_params

    # Combine base URL with the encoded query string
    encoded_url = urljoin(base_url, "") + query_string

    # Initialize the crawler
    crawler = PlaywrightCrawler(
        request_handler=router,
    )

    # Run the crawler with the initial list of URLs
    await crawler.run([encoded_url])

    # Save the data in a CSV file
    output_file = f"{data_name}.csv"
    await crawler.export_data(output_file)
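
For example, if the user enters "backend developer" as the title and "Canada" as the location, the encoded URL produced by the code above would look roughly like this (a quick sanity-check snippet, not part of the project files):

from urllib.parse import urlencode, urljoin

params = {
    "keywords": "backend developer",
    "location": "Canada",
    "trk": "public_jobs_jobs-search-bar_search-submit",
    "position": "1",
    "pageNum": "0",
}
print(urljoin("https://www.linkedin.com/jobs/search", "") + "?" + urlencode(params))
# https://www.linkedin.com/jobs/search?keywords=backend+developer&location=Canada&trk=public_jobs_jobs-search-bar_search-submit&position=1&pageNum=0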

Now that we have the URL encoded, the next step is to adjust the generated router to handle the LinkedIn job listings.

2. Routing the crawler

We will use two handlers for the application:

  • Default handler

The default_handler handles the start URL.

  • job_listing

The job_listing handler extracts the individual job details.

The Playwright crawler will crawl through the job postings page and extract links to all the job postings on the page.

[Screenshot: inspecting the job listing links in the browser dev tools]

When you inspect the job listings, you will notice that the links to the job postings are inside an ordered list with a class named jobs-search__results-list. We will then extract the links using the Playwright locator object and add them to the job_listing route for processing.

import re

from crawlee import Request
from crawlee.playwright_crawler import PlaywrightCrawlingContext
from crawlee.router import Router

router = Router[PlaywrightCrawlingContext]()

@router.default_handler
async def default_handler(context: PlaywrightCrawlingContext) -> None:
    """Default request handler."""

    # Select all the links to job postings on the page
    hrefs = await context.page.locator('ul.jobs-search__results-list a').evaluate_all("links => links.map(link => link.href)")

    # Add all the links to the job_listing route
    await context.add_requests(
        [Request.from_url(rec, label='job_listing') for rec in hrefs]
    )
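
Depending on the Crawlee for Python version you have installed, the enqueue_links helper may let you express the same handler more concisely. This is only a sketch, assuming the helper accepts selector and label arguments:

@router.default_handler
async def default_handler(context: PlaywrightCrawlingContext) -> None:
    """Alternative: enqueue every job posting link in a single call."""
    await context.enqueue_links(
        selector='ul.jobs-search__results-list a',
        label='job_listing',
    )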

Now that we have the job listings, the next step is to scrape their details.

We will extract each job's title, company name, time of posting, and the link to the job posting. Open the dev tools and extract each element using its CSS selector.

[Screenshot: locating the job detail elements with the browser dev tools]

After scraping each listing, we will remove special characters from the text to make it clean and push the data to local storage using the context.push_data function.

@router.handler('job_listing')
async def listing_handler(context: PlaywrightCrawlingContext) -> None:
    """Handler for job listings."""

    await context.page.wait_for_load_state('load')

    job_title = await context.page.locator('div.top-card-layout__entity-info h1.top-card-layout__title').text_content()

    company_name = await context.page.locator('span.topcard__flavor a').text_content()

    time_of_posting = await context.page.locator('div.topcard__flavor-row span.posted-time-ago__text').text_content()

    await context.push_data(
        {
            # we are making use of regex to remove special characters for the extracted texts

            'title': re.sub(r'[\s\n]+', '', job_title),
            'Company name': re.sub(r'[\s\n]+', '', company_name),
            'Time of posting': re.sub(r'[\s\n]+', '', time_of_posting),
            'url': context.request.loaded_url,
        }
    )
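
Note that the regex above strips every whitespace character, so a title like "Backend Developer" comes out as "BackendDeveloper". If you would rather keep single spaces between words, a small helper along these lines (our own addition, not part of the original handler) can be used in place of the re.sub calls:

import re

def clean_text(text: str | None) -> str:
    """Collapse runs of whitespace into a single space and trim the ends."""
    return re.sub(r'\s+', ' ', text or '').strip()

# clean_text('  Backend \n Developer ') -> 'Backend Developer'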

3. Creating your application

For this project, we will be using Streamlit for the web application. Before we proceed, create a new file named app.py in your project directory. Also make sure Streamlit is installed in your global Python environment before going on with this section.
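
If Streamlit is not installed yet, it can typically be added with pip, or kept inside the project environment with Poetry:

pip install streamlit
# or, scoped to the project:
poetry add streamlit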

import streamlit as st
import subprocess

# Streamlit form for inputs 
st.title("LinkedIn Job Scraper")

with st.form("scraper_form"):
    title = st.text_input("Job Title", value="backend developer")
    location = st.text_input("Job Location", value="newyork")
    data_name = st.text_input("Output File Name", value="backend_jobs")

    submit_button = st.form_submit_button("Run Scraper")

if submit_button:

    # Run the scraping script with the form inputs
    command = f"""poetry run python -m linkedin-scraper --title "{title}"  --location "{location}" --data_name "{data_name}" """

    with st.spinner("Crawling in progress..."):
         # Execute the command and display the results
        result = subprocess.run(command, shell=True, capture_output=True, text=True)

        st.write("Script Output:")
        st.text(result.stdout)

        if result.returncode == 0:
            st.success(f"Data successfully saved in {data_name}.csv")
        else:
            st.error(f"Error: {result.stderr}")

The Streamlit web application takes the user's input and uses the Python subprocess module to run the Crawlee scraping script.
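
A likely reason for going through subprocess instead of calling the coroutine directly is that Streamlit re-runs the script on every interaction and manages its own execution flow, so running the async Crawlee crawler in a separate process keeps the two from interfering. It also means the scraper stays usable from the command line on its own.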

4. Testing your app

Before we test the application, we need to make a small modification to the __main__ file so that it accepts the command-line arguments.

import asyncio
import argparse

from .main import main

def get_args():
    # ArgumentParser object to capture command-line arguments
    parser = argparse.ArgumentParser(description="Crawl LinkedIn job listings")


    # Define the arguments
    parser.add_argument("--title", type=str, required=True, help="Job title")
    parser.add_argument("--location", type=str, required=True, help="Job location")
    parser.add_argument("--data_name", type=str, required=True, help="Name for the output CSV file")


    # Parse the arguments
    return parser.parse_args()

if __name__ == '__main__':
    args = get_args()
    # Run the main function with the parsed command-line arguments
    asyncio.run(main(args.title, args.location, args.data_name))
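
With the entry point in place, you can also test the scraper directly from the terminal before wiring it up to Streamlit; this is the same invocation that app.py builds from the form inputs:

poetry run python -m linkedin-scraper --title "backend developer" --location "Canada" --data_name "backend_jobs"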

We will start the Streamlit application by running this code in the terminal:

streamlit run app.py

This is what the application should look like in the browser:

[Screenshot: the Streamlit application form in the browser]

You will get this interface showing you that the scraping has been completed:

[Screenshot: the Streamlit interface after scraping has completed]

To access the scraped data, go over to your project directory and open the CSV file.

[Screenshot: the scraped job data in the CSV file]

You should have something like this as the output of your CSV file.
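
If you prefer to inspect the results programmatically instead of opening the file by hand, a few lines of pandas (assuming it is installed in your environment) will do:

import pandas as pd

# Load the CSV produced by the crawler; use the file name you entered in the form
df = pd.read_csv("backend_jobs.csv")
print(df.head())
print(f"{len(df)} job postings scraped")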

Conclusion

In this tutorial, we have learned how to build an application that can scrape job posting data from LinkedIn using Crawlee. Have fun building great scraping applications with Crawlee.

You can find the complete working crawler code here on the GitHub repository.

Follow Crawlee for more content like this.


Crawlee

Crawlee is a web scraping and browser automation library. It helps you build reliable crawlers. Fast.

Thank you!
