The demand for Artificial Intelligence (AI) and Machine Learning (ML) is skyrocketing. Whether you are training Large Language Models (LLMs), running deep learning algorithms, or rendering complex data, relying on a standard CPU will cause massive bottlenecks. To train models efficiently, you need the massive parallel processing power of a dedicated NVIDIA GPU.
In this step-by-step tutorial, we will show you how to set up a professional AI/ML development environment using PyTorch and TensorFlow on an Ubuntu-based bare-metal server.
First, log into your server via SSH as root (or as a user with sudo privileges). Before installing any drivers, it is best practice to update your package lists and upgrade existing software to the latest versions.
Run the following commands:
$ sudo apt update && sudo apt upgrade -y
$ sudo apt install build-essential dkms -y
To allow your operating system to communicate with your physical GPU, you need to install the proprietary NVIDIA drivers.
Ubuntu makes this easy with the ubuntu-drivers tool. Run this command to see which drivers are recommended for your specific GPU:
$ ubuntu-drivers devices
To automatically install the best recommended driver for your hardware, run:
$ sudo ubuntu-drivers autoinstall
Note: Once the installation is complete, you must reboot your server for the changes to take effect.
$ sudo reboot
After logging back into your server, check if the NVIDIA driver is working correctly by using the System Management Interface tool.
$ nvidia-smi
If your installation was successful, you will see a table displaying your GPU's name (e.g., NVIDIA L4 or A10), memory usage, and the driver version. This confirms that your bare-metal server recognizes the GPU.
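If you later want the same information from a script (for monitoring or logging), nvidia-smi also supports a machine-readable query mode. Below is a minimal sketch; the gpu_summary function name is our own, while the query fields are standard nvidia-smi options. It degrades gracefully on machines where the driver is not yet installed:

```python
import shutil
import subprocess

def gpu_summary() -> str:
    """Return a one-line GPU summary via nvidia-smi, or a notice if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found: driver not installed or not on PATH"
    proc = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,driver_version,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        # nvidia-smi exists but could not talk to a GPU
        return "nvidia-smi failed: " + proc.stderr.strip()
    return proc.stdout.strip()

print(gpu_summary())
```

On a healthy install this prints something like the GPU name, driver version, and total memory on one line, which is easy to ship to a monitoring system.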
When working with AI and Machine Learning, you will often need different versions of Python for different projects. The industry standard for managing ML environments is Conda. We will install Miniconda (a lightweight version of Anaconda).
Download and run the installer:
$ mkdir -p ~/miniconda3
$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
$ bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
$ rm -rf ~/miniconda3/miniconda.sh
Initialize Conda so it starts automatically:
$ ~/miniconda3/bin/conda init bash
Close and reopen your terminal (or run source ~/.bashrc) to apply the changes.
Let's create a dedicated environment for our ML projects using Python 3.10.
$ conda create -n ai_env python=3.10 -y
$ conda activate ai_env
You will now see (ai_env) at the beginning of your terminal prompt.
Now for the exciting part: installing the actual Machine Learning frameworks. Because we are using Conda, it will automatically handle the CUDA toolkit dependencies required for GPU support.
To install TensorFlow (with GPU support):
$ pip install "tensorflow[and-cuda]"
To install PyTorch (with GPU support):
$ conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia -y
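Before running the full verification below, you can confirm that the GPU-enabled builds (rather than CPU-only wheels) were installed by asking each framework which CUDA runtime it was built against. This is a minimal sketch using our own framework_cuda_versions helper; the imports are guarded so the script also runs if one of the frameworks is missing:

```python
def framework_cuda_versions() -> dict:
    """Report the CUDA runtime each framework was built against."""
    versions = {}
    try:
        import torch
        # e.g. "11.8" on a GPU build; None on a CPU-only build
        versions["pytorch_cuda"] = torch.version.cuda
    except ImportError:
        versions["pytorch_cuda"] = "torch not installed"
    try:
        import tensorflow as tf
        # Present on GPU builds of TensorFlow
        versions["tensorflow_cuda"] = tf.sysconfig.get_build_info().get(
            "cuda_version", "unknown")
    except ImportError:
        versions["tensorflow_cuda"] = "tensorflow not installed"
    return versions

print(framework_cuda_versions())
```

If either entry comes back as None or "unknown", you likely installed a CPU-only build and should re-run the install commands above.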
Finally, let's write a quick Python script to ensure that both PyTorch and TensorFlow can successfully "see" and utilize your NVIDIA GPU.
Type python in your terminal to open the Python interactive shell, and paste the following code:
import tensorflow as tf
import torch

# Check TensorFlow
print("TensorFlow GPU Available: ", len(tf.config.list_physical_devices('GPU')) > 0)

# Check PyTorch
print("PyTorch GPU Available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch GPU Name: ", torch.cuda.get_device_name(0))
If both checks print True, congratulations! Your server is fully configured and ready to train large machine learning models.
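Once verification passes, a tiny device-agnostic computation is a useful smoke test before launching real training jobs. This sketch simply picks the GPU when one is visible and falls back to the CPU otherwise, which is also the standard pattern for writing training scripts that run on any machine:

```python
import torch

# Pick the GPU when present, fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small matrix multiply exercises the chosen device end to end.
x = torch.randn(512, 512, device=device)
y = x @ x
print(f"Computed a 512x512 matmul on: {y.device}")
```

On your GPU server this should report cuda:0; the same script runs unchanged on a CPU-only laptop.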
Setting up the environment is just the first step. To train models at maximum efficiency, you need hardware that doesn't bottleneck your data.
At eServers, our GPU Dedicated Servers are housed in secure UK data centers, featuring blazing-fast NVMe storage and the latest NVIDIA enterprise cards (including A10, L4, and RTX series). Explore our bare-metal solutions today and get the uncompromising compute power your AI projects deserve.