The usefulness of today’s AI models is greatly diminished without accessible user interfaces. Using Gradio, an open-source Python web UI library, you can bridge that gap between LLMs and non-technical end users. It allows you to create rapid prototypes for your AI projects and simplify their deployment to a wider audience.
This tutorial is aimed at machine learning engineers who typically don’t have any web development experience. It covers the Gradio basics and core concepts, interface creation for various AI model types, advanced features for UX and interactivity, and deployment and sharing best practices.
Let’s get started.
We will get started by creating a virtual environment (preferably Conda):
$ conda create -n gradio_tutorial python=3.9 -y
$ conda activate gradio_tutorial
Then, you can use PIP to install Gradio and its dependencies:
$ pip install gradio ipykernel
We’ve also installed the ipykernel package so that we can display Gradio interfaces straight within Jupyter notebooks. This process requires you to add the virtual environment you created as a kernel to Jupyter Lab. Here is the command to do it:
$ ipython kernel install --user --name=gradio_tutorial
$ jupyter lab  # Start the lab
This should allow you to create a notebook with a kernel that has Gradio installed. To verify, import it under its standard alias and print its version:
import gradio as gr

print(gr.__version__)

4.37.1
We will dive into Gradio by learning its key concepts and terminology through a “Hello World” example:
def greet(name):
    return f"Hello, {name}!"

demo = gr.Interface(
    fn=greet,
    inputs=['text'],
    outputs="text",
)

demo.launch()
When you run the above code in a cell, the output will be a small interactive interface that returns a custom greeting message:
Gradio revolves around a few key concepts: the function (fn) that does the actual work, the input and output components that collect and display data, the Interface class that wires them together, and the launch method that serves the app.
Above, we created a greet function that takes a text input and returns a text output. For this reason, the input and output components are specified as text inside the Interface class.
At the end, we call the launch method, which starts a local server. To make the UI available to anyone, you can set the share parameter to True. This starts a tunnel and serves the Gradio app at a publicly shareable URL:
demo.launch(share=True)

Running on public URL: https://d638ed5f2ce0044296.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces)
While building Gradio apps, you will spend most of your time tinkering with different components and deciding how to place them on the page. So, let’s take a closer look at what you have at your disposal.
Gradio offers a wide array of components for building interactive interfaces. These components are generally divided into two categories: input and output.
Input components allow users to provide data to the underlying processor (this can be any Python function). Some common inputs are Textbox, Slider, Dropdown, Checkbox, Image, and Audio.
Here is a dummy interface that uses some of the components above:
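The version below is a minimal sketch; the process_inputs logic and the exact components, labels, and ranges are illustrative choices rather than fixed requirements:

def process_inputs(name, age, language, subscribe, feedback):
    # Combine the five inputs into a single summary string
    return (
        f"{name}, aged {int(age)}, prefers {language}. "
        f"Subscribed: {subscribe}. Feedback: {feedback}"
    )

demo = gr.Interface(
    fn=process_inputs,
    inputs=[
        gr.Textbox(label="Name"),
        gr.Slider(minimum=18, maximum=99, label="Age"),
        gr.Dropdown(choices=["English", "Turkish", "Spanish"], label="Language"),
        gr.Checkbox(label="Subscribe to the newsletter"),
        gr.Textbox(lines=3, label="Feedback"),
    ],
    outputs="text",
)

demo.launch()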
In this example, the process_inputs function requires five parameters, so we create five input components and pass them to inputs. In general, the number of input components should match the number of function parameters that need values from the UI; parameters you give default values to don't need their own components, which is how you avoid errors and warnings for inputs the user shouldn't provide.
Notice how we use the Textbox class to specify the input component instead of a plain string like "text" as in the first example. It is recommended to use dedicated classes for input and output components because they make the components customizable. For example, all component classes have a useful label attribute, while Slider and Dropdown have arguments for specifying the value range and the available options.
Many input components can be used to display output as well. Here are some common scenarios: a Textbox can show generated or transformed text, an Image can display a model-generated picture, an Audio component can play back synthesized sound, and a Dataframe can present tabular results.
Like inputs, the number of output components must match the number of returned values from the processing function.
Gradio allows you to customize the appearance of your components to suit your needs. Here is an example that uses customized text boxes:
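One possible version of such a snippet, with a stand-in echo_upper function in place of a real model:

def echo_upper(text):
    # Stand-in processing logic: just upper-case the input
    return text.upper()

demo = gr.Interface(
    fn=echo_upper,
    inputs=gr.Textbox(
        lines=8,
        label="Input text",
        placeholder="Paste your text here...",
        info="The text will be converted to upper case.",
    ),
    outputs=gr.Textbox(
        lines=8,
        label="Output text",
        show_copy_button=True,
    ),
)

demo.launch()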
In this example, we’ve customized the Textbox components by specifying the number of lines, adding a placeholder and info text, and including a copy button for the output.
Experiment with different components and their properties to create interfaces that best suit your AI application’s requirements. To find out which properties you can change for a component, visit its docs or, better yet, use the ? operator in Jupyter Lab after its class name:
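gr.Textbox?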
Let’s put everything we’ve learned together by creating two real-world interfaces, one text-based and one image-based, powered by OpenAI models.
First, we will build a language translator from English to Turkish, Spanish, or Chinese:
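A sketch of what translate_text can look like, assuming the openai Python package; the model name, the language map values, and the prompt wording are placeholders you can swap out:

from openai import OpenAI

def translate_text(api_key, text, target_language):
    client = OpenAI(api_key=api_key)
    language_map = {
        "Turkish": "Turkish",
        "Spanish": "Spanish",
        "Chinese": "Simplified Chinese",
    }
    prompt = f"Translate the following English text into {language_map[target_language]}:\n\n{text}"
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model; use any chat model you have access to
            messages=[
                {"role": "system", "content": "You are a professional translator."},
                {"role": "user", "content": prompt},
            ],
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Translation failed: {e}"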
We start by defining a translate_text function. In its body, we set the OpenAI API key, create a language map, and construct the translation prompt. Inside a try-except block, we then send a request to the chat completions endpoint with a system prompt and, finally, return the content of the first choice.
Now, we can build the interface:
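A possible interface definition; the labels, the password-style API key field, and the title are illustrative choices:

demo = gr.Interface(
    fn=translate_text,
    inputs=[
        gr.Textbox(label="OpenAI API Key", type="password"),
        gr.Textbox(lines=5, label="English text"),
        gr.Dropdown(choices=["Turkish", "Spanish", "Chinese"], label="Target language"),
    ],
    outputs=gr.Textbox(lines=5, label="Translation", show_copy_button=True),
    title="English Translator",
)

demo.launch(share=True)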
The code is similar to that of the earlier interfaces, but it introduces a couple of new component properties.
Here is the result:
You might wonder why we ask for the user’s API key in the app rather than providing it ourselves. The reason has to do with how Gradio deploys UIs.
If we supplied our own API key as an environment variable (which is standard practice), the publicly shared version of the app wouldn’t work because it wouldn’t have access to our environment variables. In the deployment section, we will see how to fix this by deploying our apps to HuggingFace Spaces.
Let’s build another UI for generating images:
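Below is a sketch of the image-generation function, again assuming the openai package; the prompt wording and image size are illustrative:

from openai import OpenAI

def generate_surrealist_art(api_key, concept):
    client = OpenAI(api_key=api_key)
    # Illustrative surrealist prompt built around the user's concept
    prompt = f"A surrealist painting of {concept}, dreamlike, highly detailed"
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    return response.data[0].url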
We create a function named generate_surrealist_art that sends a request to dall-e-3 and returns the generated image URL using a surrealist prompt. Then, we will feed this function into an Interface class again:
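The interface can then be defined roughly as follows; the labels, placeholder text, and title are illustrative:

demo = gr.Interface(
    fn=generate_surrealist_art,
    inputs=[
        gr.Textbox(label="OpenAI API Key", type="password"),
        gr.Textbox(label="Concept", placeholder="e.g. melting clocks in a desert"),
    ],
    outputs=gr.Image(label="Generated image"),
    title="Surrealist Art Generator",
)

demo.launch(share=True)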
We specify two inputs for the API key and the concept we want to capture in a surrealist image. Then, we create one output component for the generated image with the Image class. Since the function returns the image URL as a string, the component can download and display the image directly, which is just what we need.
And here is the result:
Now, let’s build an interface for a classic tabular regression model. We will use the Diamonds dataset, which is available in Seaborn.
Start by creating a new working directory and a new script named app.py inside it. Then, paste the code from this GitHub gist, which loads the data, processes it with a Scikit-learn Pipeline, and trains a RandomForestRegressor model.
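A condensed sketch of what that training script does; the exact preprocessing and hyperparameters in the gist may differ:

import seaborn as sns
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Load the Diamonds dataset and separate the target
diamonds = sns.load_dataset("diamonds")
X = diamonds.drop("price", axis=1)
y = diamonds["price"]

categorical = ["cut", "color", "clarity"]

# One-hot encode the categorical features, pass the numeric ones through
preprocessor = ColumnTransformer(
    transformers=[("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
    remainder="passthrough",
)

pipeline = Pipeline(
    steps=[
        ("preprocess", preprocessor),
        ("model", RandomForestRegressor(n_estimators=100, random_state=42)),
    ]
)

pipeline.fit(X, y)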
The next step is to create a processing function that accepts the same number of inputs as there are features in the Diamonds dataset:
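A sketch of such a function; the name predict_price and the output formatting are illustrative:

import pandas as pd

def predict_price(cut, color, clarity, carat, depth, table, x, y, z):
    # Build a one-row DataFrame with the same columns used during training
    sample = pd.DataFrame(
        {
            "carat": [carat], "cut": [cut], "color": [color], "clarity": [clarity],
            "depth": [depth], "table": [table], "x": [x], "y": [y], "z": [z],
        }
    )
    predicted = pipeline.predict(sample)[0]
    return f"Predicted price: ${predicted:,.2f}"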
The function converts those inputs into a DataFrame and passes it to the .predict() method of the trained model pipeline. In the end, it returns a string with the predicted price.
Now, the Interface class must match this function's signature: nine input components for processing the features and one output for displaying the predicted price:
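A possible interface definition along those lines; the labels and the way the slider bounds are derived from the dataset are illustrative:

demo = gr.Interface(
    fn=predict_price,
    inputs=[
        gr.Dropdown(choices=list(diamonds["cut"].unique()), label="Cut"),
        gr.Dropdown(choices=list(diamonds["color"].unique()), label="Color"),
        gr.Dropdown(choices=list(diamonds["clarity"].unique()), label="Clarity"),
        gr.Slider(minimum=float(diamonds["carat"].min()), maximum=float(diamonds["carat"].max()), label="Carat"),
        gr.Slider(minimum=float(diamonds["depth"].min()), maximum=float(diamonds["depth"].max()), label="Depth"),
        gr.Slider(minimum=float(diamonds["table"].min()), maximum=float(diamonds["table"].max()), label="Table"),
        gr.Slider(minimum=float(diamonds["x"].min()), maximum=float(diamonds["x"].max()), label="x (mm)"),
        gr.Slider(minimum=float(diamonds["y"].min()), maximum=float(diamonds["y"].max()), label="y (mm)"),
        gr.Slider(minimum=float(diamonds["z"].min()), maximum=float(diamonds["z"].max()), label="z (mm)"),
    ],
    outputs="text",
    title="Diamond Price Predictor",
)

demo.launch(share=True)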
Inside the class, we create three dropdowns for the categorical features. The options are filled in with the unique categories in each feature. We also create six slider components to accept numeric features. The ranges of sliders are determined by the minimum and maximum values of each feature.
All we have to do now is execute the script to run and deploy the app:
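Assuming the launch call in app.py sets share=True, this is a single command that starts the local server and prints the public link:

$ python app.py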
Here is the result:
For best practices and optimization tips, skip to the Best Practices section below.
We’ve already seen how easy it is to deploy Gradio apps by enabling a single argument. Of course, the disadvantage of this method is that the demos expire within 72 hours. So, the recommended method of deploying Gradio is through HuggingFace Spaces. HuggingFace acquired Gradio in 2021, making the integration between the two platforms seamless.
So, for this tutorial or any future apps you create with Gradio, sign up for a free account at huggingface.co and navigate to Settings > Access Tokens to generate an access token:
The token is displayed only once, so be sure to store it somewhere safe.
With this token, you can deploy as many Gradio apps as you want with permanent hosting on Spaces. As an example, we will deploy the Diamond Prices Prediction model from the previous section, and you will find it surprisingly easy.
All you have to do is navigate to the directory with the UI script and call gradio deploy on the terminal:
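$ gradio deploy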
The terminal walks you through converting your script into a functioning HuggingFace Space, asking for details such as the Space title, the app file to run, the hardware to use, and any secrets (such as API keys) your app needs.
Once finished, the terminal presents you with a link to the deployed Space. Here is what it looks like:
Another great thing about this method of deployment is that Gradio automatically converts the demo to a working REST API. The instructions to access and query it are always located at the bottom:
So, in one go, you have both permanent UI hosting for your application for non-technical users and a REST API for your colleagues and developer friends.
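As a rough illustration, you could query such an API from Python with the gradio_client package along these lines; the Space name is a placeholder, and the argument order must match your interface:

from gradio_client import Client

# "username/diamond-price-prediction" is a placeholder; use your own Space ID
client = Client("username/diamond-price-prediction")

result = client.predict(
    "Ideal", "E", "VS1",   # cut, color, clarity
    0.5, 61.5, 55.0,       # carat, depth, table
    5.0, 5.0, 3.0,         # x, y, z
    api_name="/predict",
)
print(result)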
For more deployment and sharing options, such as embedding demos into webpages, adding Google authentication to apps, etc., visit the “Sharing Your App” section of Gradio’s documentation.
When developing user interfaces with Gradio, following best practices can significantly improve the user experience and maintainability of your application. Here are some key recommendations:
Organize Gradio applications in Python scripts for better version control, collaboration, and deployment.
Use appropriate sizing and layout tools (e.g., gr.Column(), gr.Row()) to ensure a balanced, responsive interface.
Utilize ‘info’ and ‘label’ attributes to give clear instructions and context for each component.
For models with many features, use file inputs (CSV, JSON) to enable batch predictions and simplify the interface.
Use python-dotenv for local development and set variables in Hugging Face Spaces for deployment; a short sketch follows this list.
Validate inputs, provide clear error messages, and use try-except blocks for graceful error handling.
Implement caching and lazy loading for large models, and use gr.Progress() to report progress during long-running tasks.
Ensure high contrast, provide alt text for images, and enable keyboard navigation for all interactive elements.
Use accordions or tabs to organize complex interfaces, revealing advanced options as needed.
Keep dependencies updated, monitor for bugs, and continuously improve based on user feedback.
Utilize HuggingFace tools and resources for seamless integration with Gradio, including model repositories and datasets.
For large tabular models, upload to HuggingFace Hub and load directly in your Gradio script to improve performance and reduce local storage requirements.
For large datasets, upload to HuggingFace Hub and access them directly in your Gradio application to streamline data management and improve loading times.
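As a small sketch of the environment-variable tip above (the file and variable names are illustrative):

import os
from dotenv import load_dotenv

load_dotenv()  # reads a local .env file during development; does nothing if the file is absent
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")  # on HuggingFace Spaces, set this as a Space secret instead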
In this article, we have learned the basics of building user interfaces for AI applications using Gradio. We have only scratched the surface, as Gradio offers many more features for building complex interfaces. For example, interface state allows your app to remember outputs from one function call to another. Reactive interfaces dynamically update the UI as soon as the user input changes. And with Blocks, you can build apps with custom layouts and designs.