How <canvas> Saved the Day - Handling Large Images in the Browser
A client uploaded 28 images, each around 5800 x 9500 pixels and 28 MB in size. When attempting to display these images in the browser, the entire tab froze - even refusing to close - for a solid 10 minutes. And this was on a high-end machine with a Ryzen 9 CPU and an RTX 4080 GPU, not a tiny laptop.
Here's the Situation
In our CMS (Grace Web), when an admin uploads a file - be it a picture or a video - we display it immediately after the upload finishes. It's great for UX: it confirms the upload succeeded, shows that they uploaded what they intended to, and makes the UI feel responsive.
But with these massive images, the browser could barely display them and froze up.
It's important to note that the issue wasn't with the upload itself but with displaying these large images. Rendering 28 huge images, whether one by one or all at once, was simply too much - even on an otherwise blank page, it's just... nope.
Why Did This Happen (Deep Dive)?
Anytime you display a picture in an <img> tag, the browser goes through an entire process that involves the CPU, RAM, GPU, and sometimes the drive - and it may cycle through parts of this process several times during image loading and layout recalculations. It decodes the file into pixels, transfers the data between hardware components, runs interpolation algorithms for scaling, and so on. For a browser, it's a significant amount of work.
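To put rough numbers on it: a decoded bitmap needs about width x height x 4 bytes of RAM (one byte each for R, G, B, and alpha). The sketch below runs that math for the client's images; the exact figures depend on the browser's internal pixel formats, so treat it as an estimate.

// Back-of-the-envelope memory footprint of the decoded bitmaps (estimate only).
const width = 5800;
const height = 9500;
const bytesPerImage = width * height * 4;   // RGBA, 8 bits per channel: ~220 MB per image
const totalBytes = bytesPerImage * 28;      // ~6.2 GB of raw pixel data for the whole batch

console.log((bytesPerImage / 1e6).toFixed(0) + ' MB per decoded image');
console.log((totalBytes / 1e9).toFixed(1) + ' GB for all 28 images');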
The Brainstormed Solutions
We considered several options:
- Resize immediately after upload and display a thumbnail. Issue: resizing could take too long and isn't guaranteed to be near-instant, especially if the resizing queue isn't empty - the UI would stop feeling responsive.
- Check file sizes and display only the file names for large files. Issue: this would detract from the user experience, especially for admins who want close control over and checks of any media they upload (incidentally, those are often the same people with the large files).
Neither of these solutions felt usable. We didn't want to limit our admins' ability to upload whatever they needed, nor did we want to compromise on the user experience.
Lightbulb! Using <canvas>
I recalled that with the <canvas> element, the drawing is handled by the GPU - so instead of asking the browser to lay out the full-size image, we could draw a small preview of it onto a canvas ourselves.
Testing the Theory
We decided to replace the <img> elements with <canvas> elements and test it:
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
const img = new Image();

// img.src = URL.createObjectURL(file); // Use this if you're working with uploaded files
img.src = filesroot + data.file_path; // Our existing file path

img.onload = function() {
  // Resize canvas to desired size (for preview)
  canvas.width = 80 * (img.width / img.height); // Adjust scaling as needed
  canvas.height = 80;

  // Draw the image onto the canvas, resizing it in the process
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);

  file_div.innerHTML = ''; // Prepared <div> in UI to display the thumbnail
  file_div.appendChild(canvas);
};

The Results

The improvement was immediate and dramatic! Not only did the browser handle the 28 large images without any hiccups, but we also pushed it further by loading 120 MB images measuring 28,000 x 17,000 pixels, and it still behaved as if it were rendering a tiny icon.

Where Is the Difference?

With the <canvas> element, the drawing is entirely in the hands of the GPU, and those chips are literally made for this task.

- An <img> still attempts to render 28,000 * 17,000 = 476,000,000 pixels.
- This <canvas> draws only about 80 * 80 = 6,400 pixels.

By resizing the image within the <canvas> to a small preview size, we drastically reduce the amount of data the browser needs to process - it takes in hundreds of millions of pixels but only ever draws a few thousand. A great deal, right?

Next Steps

Given this success, we're now considering replacing <img> with <canvas> in other parts of our application, particularly in admin screens that display around 250 images at once. While those images are already properly resized, we're curious to see whether <canvas> can offer even better performance.

We'll be conducting testing and benchmarking to measure any performance gains. I'll be sure to share our findings in a future article.

Important Caveats

While <canvas> worked wonders for our specific use case, I must note that you should not use <canvas> for displaying images on public-facing parts of websites. Here's why:

- Accessibility: <img> elements are readable by screen readers, aiding visually impaired users.
- SEO: Search engines index images from <img> tags, which can improve your site's visibility. (In the same way, AI-training scrapers "index" the images, too. Well, what can you do...)
- <img> with <picture> can automatically detect pixel density and serve a bigger or smaller picture for a perfect fit - <canvas> can't. I wrote an article on how to build your own.

For public websites, it's best to properly resize images on the server side before serving them to users.
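As a minimal sketch of what that server-side step can look like - assuming a Node.js pipeline with the sharp library, which is my own example rather than anything this setup prescribes; the 1600 px width and JPEG quality are placeholder values:

// Hypothetical server-side resize step using Node.js + sharp (not part of our actual setup).
// Writes a web-friendly derivative next to the original upload.
const sharp = require('sharp');

async function makeWebVersion(inputPath, outputPath) {
  await sharp(inputPath)
    .resize({ width: 1600, withoutEnlargement: true }) // never upscale smaller originals
    .jpeg({ quality: 80 })                             // reasonable default for photos
    .toFile(outputPath);
}

makeWebVersion('uploads/original.jpg', 'uploads/original.web.jpg')
  .catch((err) => console.error('Resize failed:', err));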
If you're interested in how to automate image resizing and optimization, I wrote an article on that topic you might find useful.

Conclusion

I love optimization! Even though hardware keeps getting more powerful every day, optimizing - even when "it's good, but could be better" - prevents issues like this from ever appearing.

This approach is not standard. And I don't care much for standards. They are often treated as hard rules, but they are just guidelines. This solution fits: we don't have clients on ancient devices without solid <canvas> support. So even though we now have more code to manage and multiple ways of displaying pictures, it's a perfect fit, and that's what matters!

Do you have similar experiences? How did you handle it? Feel free to share your experiences or ask questions in the comments below!