Date: December 9, 2025
ROCm Version: 7.1.1 (Production) / 7.9.0 (Technology Preview)
GPU: AMD Radeon 8060S (RDNA 3.5, Strix Halo)
ROCm 7.1.1 is the current production release for AMD GPUs. The GZ302’s Radeon 8060S (RDNA 3.5 architecture) can leverage ROCm for AI/ML workloads, though official support status depends on AMD’s hardware support matrix.
The Radeon 8060S (Strix Halo) is part of the RDNA 3.5 generation. ROCm support considerations:
- HSA_OVERRIDE_GFX_VERSION=11.0.0 for RDNA 3.x compatibility

| Component | Status | Notes |
|---|---|---|
| ROCm 7.1.1 Runtime | ⚠️ Unofficial | Consumer GPU, may need overrides |
| PyTorch with ROCm | ✅ Supported | Via gfx1100 compatibility mode |
| TensorFlow with ROCm | ✅ Supported | Via gfx1100 compatibility mode |
| Ollama with ROCm | ✅ Supported | Works with override |
| bitsandbytes | ✅ Supported | RDNA 3 compatible |
| MIOpen | ✅ Supported | Via gfx1100 mode |
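Before applying any override, it is worth checking which gfx target the runtime actually reports. A quick query against rocminfo shows this (the 8060S is expected to appear as gfx1151):

```bash
# List the gfx targets ROCm detects; the Strix Halo iGPU should report gfx1151
rocminfo | grep -i gfx
```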
For Arch Linux:
```bash
# Install ROCm from the official repos
sudo pacman -S rocm-hip-runtime rocm-opencl-runtime

# Install development tools
sudo pacman -S rocm-hip-sdk rocm-opencl-sdk

# Set environment variables
export HSA_OVERRIDE_GFX_VERSION=11.0.0
export ROCM_PATH=/opt/rocm
```
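To make these variables survive reboots for all users, one option is a drop-in under /etc/profile.d (a sketch; the file name is arbitrary):

```bash
# Persist the ROCm environment system-wide (file name is an arbitrary choice)
sudo tee /etc/profile.d/rocm-gz302.sh >/dev/null <<'EOF'
export ROCM_PATH=/opt/rocm
export HSA_OVERRIDE_GFX_VERSION=11.0.0
EOF
```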
For Ubuntu:
```bash
# Add the ROCm repository. Note: this example installer targets ROCm 6.3;
# pick the amdgpu-install package matching your target release from
# https://repo.radeon.com/amdgpu-install/
wget https://repo.radeon.com/amdgpu-install/latest/ubuntu/jammy/amdgpu-install_6.3.60300-1_all.deb
sudo apt install ./amdgpu-install_6.3.60300-1_all.deb

# Install ROCm
sudo amdgpu-install --usecase=rocm

# Set environment
export HSA_OVERRIDE_GFX_VERSION=11.0.0
```
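On Ubuntu, device access to /dev/kfd and /dev/dri also requires membership in the render and video groups (log out and back in for it to take effect):

```bash
# Grant the current user access to the ROCm device nodes
sudo usermod -aG render,video "$USER"
```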
Using Conda:
```bash
# Create a clean environment for the ROCm PyTorch build
conda create -n rocm python=3.11
conda activate rocm

# Install the PyTorch ROCm build. Note: the pytorch conda channel ships
# CUDA builds, not ROCm, so use pip with the ROCm wheel index instead.
# (ROCm wheels bundle their own ROCm libraries, so the wheel index version
# does not have to match the system ROCm release exactly.)
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2
```
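To confirm the installed wheel really is a ROCm build (and not a CUDA build pulled from a default index), check torch.version.hip, which is non-None only on ROCm builds:

```bash
# Prints a HIP version string on ROCm builds, "None" on CUDA/CPU builds
python3 -c "import torch; print('HIP version:', torch.version.hip)"
```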
Using Official ROCm Containers:
```bash
# Pull the ROCm PyTorch container (check Docker Hub for currently published tags)
docker pull rocm/pytorch:rocm7.1.1_ubuntu22.04_py3.10_pytorch_release_2.3.0

# Run with GPU access
docker run -it --device=/dev/kfd --device=/dev/dri \
  --group-add video --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  rocm/pytorch:rocm7.1.1_ubuntu22.04_py3.10_pytorch_release_2.3.0
```
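A one-shot variant of the same container is handy for verifying GPU passthrough without starting an interactive session (a sketch using the same image tag as above):

```bash
# Run a disposable container and check GPU visibility from inside it
docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video \
  rocm/pytorch:rocm7.1.1_ubuntu22.04_py3.10_pytorch_release_2.3.0 \
  python3 -c "import torch; print('GPU visible:', torch.cuda.is_available())"
```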
Add to ~/.bashrc or ~/.zshrc:
```bash
# ROCm paths
export ROCM_PATH=/opt/rocm
export PATH=$ROCM_PATH/bin:$PATH
export LD_LIBRARY_PATH=$ROCM_PATH/lib:$LD_LIBRARY_PATH

# GPU override for RDNA 3.5 (gfx1151)
export HSA_OVERRIDE_GFX_VERSION=11.0.0

# Optional: performance tuning
export GPU_MAX_HEAP_SIZE=100
export GPU_MAX_ALLOC_PERCENT=100
export HSA_ENABLE_SDMA=0
```
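After editing, reload the shell configuration and confirm the variables took effect (a quick sanity check, not strictly required):

```bash
# Re-read the shell config and list the ROCm-related variables
source ~/.bashrc
env | grep -E 'ROCM_PATH|HSA_OVERRIDE_GFX_VERSION|GPU_MAX'
```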
Then verify the installation:

```bash
# Check ROCm installation
rocminfo

# Check GPU detection
rocm-smi

# Test PyTorch ROCm
python3 -c "import torch; print(f'ROCm available: {torch.cuda.is_available()}'); print(f'GPU: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"None\"}')"
```
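Beyond the availability check above, a short compute smoke test catches the case where the device is detected but kernels fail to launch. This is a minimal sketch; torch.cuda calls work unchanged on ROCm builds because HIP is exposed through the CUDA API surface:

```bash
python3 - <<'EOF'
import torch

# Allocate a tensor on the GPU and run a matmul; on ROCm builds the
# "cuda" device maps to the HIP/ROCm backend.
x = torch.randn(1024, 1024, device="cuda")
y = x @ x
torch.cuda.synchronize()
print("GPU matmul OK:", y.shape)
EOF
```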
Problem: The Radeon 8060S (gfx1151) may not be in ROCm’s official support matrix.
Workaround:
```bash
export HSA_OVERRIDE_GFX_VERSION=11.0.0
```
This tells ROCm to treat the GPU as gfx1100 (RDNA 3.0), which is officially supported.
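The override can also be scoped to a single process rather than exported globally, which is useful when comparing behavior with and without it (your_script.py is a placeholder name):

```bash
# Apply the override to one process only; the rest of the session is unaffected
HSA_OVERRIDE_GFX_VERSION=11.0.0 python3 your_script.py
```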
Problem: Ollama may not automatically detect ROCm on consumer GPUs.
Workaround:
```bash
# Make the GPU override visible to the Ollama server process
HSA_OVERRIDE_GFX_VERSION=11.0.0 ollama serve

# Or set it permanently
echo 'export HSA_OVERRIDE_GFX_VERSION=11.0.0' >> ~/.bashrc
```
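If Ollama runs as a systemd service (the default for the Linux installer), the variable must be set on the service itself rather than in your shell. A standard systemd drop-in does this:

```bash
# Add the override to the Ollama service environment
sudo systemctl edit ollama
# In the editor that opens, add:
#   [Service]
#   Environment="HSA_OVERRIDE_GFX_VERSION=11.0.0"
sudo systemctl restart ollama
```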
Problem: PyTorch defaults to CUDA terminology even with ROCm.
Expected Behavior:
- torch.cuda.is_available() returns True with ROCm
- torch.cuda.device_count() shows your GPU

Problem: Large models may fail to allocate memory.
Workaround:
```bash
# Increase heap size
export GPU_MAX_HEAP_SIZE=100
export GPU_MAX_ALLOC_PERCENT=100

# For Ollama, limit the context window (ollama run has no --ctx-size flag;
# set num_ctx interactively or in a Modelfile)
ollama run llama3.1:8b
# then inside the session: /set parameter num_ctx 4096
```
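While a model loads, VRAM usage can be watched from a second terminal to see how close the allocation gets to the limit:

```bash
# Refresh VRAM usage once per second while the model loads
watch -n 1 rocm-smi --showmeminfo vram
```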
LLM Inference (with Ollama):
Stable Diffusion:
PyTorch Training:
✅ Good for:
❌ Not ideal for:
✅ Good for:
The gz302-llm.sh module has been updated to support ROCm 7.1.1:
```bash
# Install LLM tools with ROCm 7.1.1
sudo ./gz302-llm.sh

# Features:
# - Ollama with ROCm support
# - PyTorch with ROCm 7.1.1
# - bitsandbytes for RDNA 3
# - Environment variable setup
# - Automatic GPU detection
```
The setup script automatically installs Ollama and PyTorch with ROCm support, sets up the environment variables described above, and detects the GPU.
```bash
#!/bin/bash
echo "=== ROCm 7.1.1 Setup Test ==="
echo

# 1. Check ROCm installation
echo "1. ROCm Installation:"
if command -v rocminfo >/dev/null 2>&1; then
    echo "✓ rocminfo found"
    rocminfo | grep "Name:" | head -1
else
    echo "✗ rocminfo not found"
fi
echo

# 2. Check GPU detection
echo "2. GPU Detection:"
if command -v rocm-smi >/dev/null 2>&1; then
    echo "✓ rocm-smi found"
    rocm-smi --showproductname || echo "GPU detection failed"
else
    echo "✗ rocm-smi not found"
fi
echo

# 3. Check environment
echo "3. Environment Variables:"
echo "ROCM_PATH: ${ROCM_PATH:-not set}"
echo "HSA_OVERRIDE_GFX_VERSION: ${HSA_OVERRIDE_GFX_VERSION:-not set}"
echo

# 4. Test PyTorch
echo "4. PyTorch ROCm:"
python3 -c "
import torch
print(f'PyTorch version: {torch.__version__}')
print(f'ROCm available: {torch.cuda.is_available()}')
if torch.cuda.is_available():
    print(f'Device count: {torch.cuda.device_count()}')
    print(f'Device name: {torch.cuda.get_device_name(0)}')
" 2>/dev/null || echo "PyTorch not installed or ROCm not detected"
echo

# 5. Test Ollama
echo "5. Ollama:"
if command -v ollama >/dev/null 2>&1; then
    echo "✓ Ollama installed"
    ollama list 2>/dev/null || echo "No models installed"
else
    echo "✗ Ollama not installed"
fi
```
ROCm 7.1.1 can work with the Radeon 8060S using the HSA_OVERRIDE_GFX_VERSION=11.0.0 workaround. While the GPU, as consumer/mobile silicon, is not officially supported, it delivers good AI/ML performance for LLM inference with Ollama, Stable Diffusion, and PyTorch workloads.
Recommendation: Use ROCm 7.1.1 for AI/ML workloads on the GZ302. The setup is straightforward, performance is good, and the gz302-llm.sh module automates the installation.
Document Version: 1.0
Last Updated: December 9, 2025
Tested On: GZ302EA-XS99 with Radeon 8060S
Status: Production Ready