


Introduction
A question that comes up very often from users of the watsonx.ai LLMs is: "How do we set the sampling parameters?"
Actually, it is quite easy.
Sampling Parameters (or generation parameters)
- Access your watsonx.ai instance.
- Click on "Open Prompt Lab". Once in the Prompt Lab, in either tab, click the parameters icon (the icon on the far right).
You can also change the selected LLM here (the one used previously or the one set by default).
- Once the parameters dialog box is open, set them as needed.
- Once the parameters are set, choose "View code >" from the same set of tool icons.
The interface provides the code that embeds these parameters in three forms: cURL, Node.js, and Python, as in the samples below.
curl "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29" \ -H 'Content-Type: application/json' \ -H 'Accept: application/json' \ -H "Authorization: Bearer ${YOUR_ACCESS_TOKEN}" \ -d '{ "input": "systemYou are Granite, an AI language model developed by IBM in 2024. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.\nassistant", "parameters": { "decoding_method": "sample", "max_new_tokens": 200, "min_new_tokens": 100, "random_seed": 42, "stop_sequences": [], "temperature": 0.7, "top_k": 50, "top_p": 1, "repetition_penalty": 1 }, "model_id": "ibm/granite-3-8b-instruct", "project_id": "the one you get" }'
Node.js:

export const generateText = async () => {
  const url = "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29";
  const headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_ACCESS_TOKEN"
  };
  const body = {
    input: "systemYou are Granite, an AI language model developed by IBM in 2024. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.\nassistant",
    parameters: {
      decoding_method: "sample",
      max_new_tokens: 200,
      min_new_tokens: 100,
      random_seed: 42,
      stop_sequences: [],
      temperature: 0.7,
      top_k: 50,
      top_p: 1,
      repetition_penalty: 1
    },
    model_id: "ibm/granite-3-8b-instruct",
    project_id: "the-one-you-get"
  };

  const response = await fetch(url, {
    headers,
    method: "POST",
    body: JSON.stringify(body)
  });

  if (!response.ok) {
    throw new Error("Non-200 response");
  }

  return await response.json();
};
Python:

import requests

url = "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29"

body = {
    "input": """systemYou are Granite, an AI language model developed by IBM in 2024. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.
assistant""",
    "parameters": {
        "decoding_method": "sample",
        "max_new_tokens": 200,
        "min_new_tokens": 100,
        "random_seed": 42,
        "temperature": 0.7,
        "top_k": 50,
        "top_p": 1,
        "repetition_penalty": 1
    },
    "model_id": "ibm/granite-3-8b-instruct",
    "project_id": "the-one-you-get"
}

headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_ACCESS_TOKEN"
}

response = requests.post(url, headers=headers, json=body)

if response.status_code != 200:
    raise Exception("Non-200 response: " + str(response.text))

data = response.json()
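As an alternative to calling the REST endpoint directly, the same generation parameters could also be passed through the ibm-watsonx-ai Python SDK. The sketch below is only an illustration under that assumption; the API key and project ID placeholders are hypothetical and not part of the generated samples.

# Assumption: the ibm-watsonx-ai Python SDK is installed (pip install ibm-watsonx-ai).
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_API_KEY",        # hypothetical placeholder
)

# Same sampling/generation parameters as in the REST payload above.
params = {
    "decoding_method": "sample",
    "max_new_tokens": 200,
    "min_new_tokens": 100,
    "random_seed": 42,
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 1,
    "repetition_penalty": 1,
}

model = ModelInference(
    model_id="ibm/granite-3-8b-instruct",
    credentials=credentials,
    project_id="the-one-you-get",  # hypothetical placeholder
    params=params,
)

print(model.generate_text(prompt="Explain sampling parameters in one sentence."))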
In the generated samples, the only values the developer needs to fill in are the access token and the project ID.
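If you only have an IBM Cloud API key, one common way to obtain that access token is to exchange the key for a short-lived bearer token against the IBM Cloud IAM identity endpoint. A minimal sketch in Python, assuming the requests library and a valid API key:

import requests

# Exchange an IBM Cloud API key for a short-lived IAM bearer token.
def get_access_token(api_key: str) -> str:
    response = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        data={
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": api_key,
        },
    )
    response.raise_for_status()
    return response.json()["access_token"]

The returned value is what replaces YOUR_ACCESS_TOKEN in the samples above.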
Et voilà!
Conclusion
The watsonx.ai platform makes it very easy for application developers to adjust the set of LLM sampling parameters.