ColorJitter() can randomly change the brightness, contrast, saturation and hue of zero or more images, as shown below:
*Memos:
- The 1st argument for initialization is brightness(Optional-Default:None-Type:float or tuple/list(float)):
*Memos:
- It's the range [min, max] from which the brightness factor is randomly chosen.
- It must be 0 <= x.
- A single value is converted to [max(0, 1-brightness), 1+brightness] (see the short check after this list).
- A tuple or list must be 1D with 2 elements. *The 1st element must be less than or equal to the 2nd element.
- The 2nd argument for initialization is contrast(Optional-Default:None-Type:float or tuple/list(float)):
*Memos:
- It's the range [min, max] from which the contrast factor is randomly chosen.
- It must be 0 <= x.
- A single value is converted to [max(0, 1-contrast), 1+contrast].
- A tuple or list must be 1D with 2 elements. *The 1st element must be less than or equal to the 2nd element.
- The 3rd argument for initialization is saturation(Optional-Default:None-Type:float or tuple/list(float)):
*Memos:
- It's the range [min, max] from which the saturation factor is randomly chosen.
- It must be 0 <= x.
- A single value is converted to [max(0, 1-saturation), 1+saturation].
- A tuple or list must be 1D with 2 elements. *The 1st element must be less than or equal to the 2nd element.
- The 4th argument for initialization is hue(Optional-Default:None-Type:float or tuple/list(float)):
*Memos:
- It's the range [min, max] from which the hue factor is randomly chosen.
- It must be -0.5 <= x <= 0.5. *A single value must be 0 <= x <= 0.5.
- A single value is converted to [-hue, hue].
- A tuple or list must be 1D with 2 elements. *The 1st element must be less than or equal to the 2nd element.
- The 1st argument of the call is img(Required-Type:PIL Image or tensor/tuple/list(int or float)):
*Memos:
- It must be 2D or 3D. *For a 3D tensor, the 1st dimension (channels) must have 1 or 3 elements ([C, H, W]).
- Don't use img=.
- Using v2 is recommended, according to the torchvision docs section V1 or V2? Which one should I use?.
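As a quick check of the conversion rules above, the minimal sketch below initializes ColorJitter and prints the stored ranges. The values 0.4, 2.0, (0.5, 1.5) and 0.3 are arbitrarily chosen for illustration and are not from the article's example:

from torchvision.transforms.v2 import ColorJitter

cj = ColorJitter(brightness=0.4, contrast=2.0, saturation=(0.5, 1.5), hue=0.3)
print(cj.brightness)  # range [max(0, 1-0.4), 1+0.4] = [0.6, 1.4]
print(cj.contrast)    # range [max(0, 1-2.0), 1+2.0] = [0.0, 3.0]
print(cj.saturation)  # a given [min, max] pair is kept as [0.5, 1.5]
print(cj.hue)         # a single value becomes [-0.3, 0.3]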
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import ColorJitter

colorjitter = ColorJitter()

colorjitter = ColorJitter(brightness=0, contrast=0, saturation=0, hue=0)

colorjitter = ColorJitter(brightness=(1.0, 1.0), contrast=(1.0, 1.0),
                          saturation=(1.0, 1.0), hue=(0.0, 0.0))
colorjitter
# ColorJitter()

print(colorjitter.brightness)
# None

print(colorjitter.contrast)
# None

print(colorjitter.saturation)
# None

print(colorjitter.hue)
# None

origin_data = OxfordIIITPet(
    root="data",
    transform=None
    # transform=ColorJitter()
    # transform=ColorJitter(brightness=0,
    #                       contrast=0,
    #                       saturation=0,
    #                       hue=0)
    # transform=ColorJitter(brightness=(1.0, 1.0),
    #                       contrast=(1.0, 1.0),
    #                       saturation=(1.0, 1.0),
    #                       hue=(0.0, 0.0))
)

p2bright_data = OxfordIIITPet( # `p` is plus.
    root="data",
    transform=ColorJitter(brightness=2.0)
    # transform=ColorJitter(brightness=(0.0, 3.0))
)

p2p2bright_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(brightness=(2.0, 2.0))
)

p05p05bright_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(brightness=(0.5, 0.5))
)

p2contra_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(contrast=2.0)
    # transform=ColorJitter(contrast=(0.0, 3.0))
)

p2p2contra_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(contrast=(2.0, 2.0))
)

p05p05contra_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(contrast=(0.5, 0.5))
)

p2satura_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(saturation=2.0)
    # transform=ColorJitter(saturation=(0.0, 3.0))
)

p2p2satura_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(saturation=(2.0, 2.0))
)

p05p05satura_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(saturation=(0.5, 0.5))
)

p05hue_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(hue=0.5)
    # transform=ColorJitter(hue=(-0.5, 0.5))
)

p025p025hue_data = OxfordIIITPet(
    root="data",
    transform=ColorJitter(hue=(0.25, 0.25))
)

m025m025hue_data = OxfordIIITPet( # `m` is minus.
    root="data",
    transform=ColorJitter(hue=(-0.25, -0.25))
)

import matplotlib.pyplot as plt

def show_images(data, main_title=None):
    plt.figure(figsize=(10, 5))
    plt.suptitle(t=main_title, y=0.8, fontsize=14)
    for i, (im, _) in zip(range(1, 6), data):
        plt.subplot(1, 5, i)
        plt.imshow(X=im)
        plt.xticks(ticks=[])
        plt.yticks(ticks=[])
    plt.tight_layout()
    plt.show()

show_images(data=origin_data, main_title="origin_data")
show_images(data=p2bright_data, main_title="p2bright_data")
show_images(data=p2p2bright_data, main_title="p2p2bright_data")
show_images(data=p05p05bright_data, main_title="p05p05bright_data")

show_images(data=origin_data, main_title="origin_data")
show_images(data=p2contra_data, main_title="p2contra_data")
show_images(data=p2p2contra_data, main_title="p2p2contra_data")
show_images(data=p05p05contra_data, main_title="p05p05contra_data")

show_images(data=origin_data, main_title="origin_data")
show_images(data=p2satura_data, main_title="p2satura_data")
show_images(data=p2p2satura_data, main_title="p2p2satura_data")
show_images(data=p05p05satura_data, main_title="p05p05satura_data")

show_images(data=origin_data, main_title="origin_data")
show_images(data=p05hue_data, main_title="p05hue_data")
show_images(data=p025p025hue_data, main_title="p025p025hue_data")
show_images(data=m025m025hue_data, main_title="m025m025hue_data")
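ColorJitter can also be called directly on a single image instead of being passed to a dataset. The sketch below is a minimal, assumed example that uses a random [C, H, W] float tensor as a stand-in for a real image:

import torch
from torchvision.transforms.v2 import ColorJitter

jitter = ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.25)

img = torch.rand(3, 64, 64)  # dummy 3-channel float image with values in [0, 1]
out = jitter(img)            # pass the image positionally (don't use img=)
print(out.shape)             # torch.Size([3, 64, 64])

New brightness, contrast, saturation and hue factors are sampled from the ranges on every call, so calling jitter(img) again generally produces a different result; a degenerate range such as brightness=(2.0, 2.0) makes the transform deterministic, which is what the p2p2... and m025m025... datasets above rely on. The same call also works with a PIL Image, which is what OxfordIIITPet yields.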