
How to Set Up an AI/Machine Learning Environment (PyTorch & TensorFlow)


The demand for Artificial Intelligence (AI) and Machine Learning (ML) is skyrocketing. Whether you are training Large Language Models (LLMs), running deep learning algorithms, or processing large datasets, relying on a standard CPU will create massive bottlenecks. To train models efficiently, you need the massive parallel processing power of a dedicated NVIDIA GPU.

In this step-by-step tutorial, we will show you how to set up a professional AI/ML development environment using PyTorch and TensorFlow on an Ubuntu-based bare-metal server.

Prerequisites

  • A Dedicated GPU Server: You need a physical machine with an NVIDIA GPU (such as an NVIDIA A10, L4 Tensor Core, or RTX A4000). If you don't have one, you can deploy a high-performance UK GPU Dedicated Server with full root access at eServers.
  • Operating System: Ubuntu 22.04 LTS installed (highly recommended for AI environments).
  • Access: SSH access to your server with root or sudo privileges.

Step 1 — Update Your System

First, log in to your server via SSH. Before installing any drivers, it is best practice to update your package lists and upgrade existing packages to their latest versions.

Run the following commands:

bash
 
$ sudo apt update && sudo apt upgrade -y
$ sudo apt install build-essential dkms -y
                                            

Step 2 — Install the NVIDIA Drivers

To allow your operating system to communicate with your physical GPU, you need to install the proprietary NVIDIA drivers.

Ubuntu makes this easy with the ubuntu-drivers tool. Run this command to see which drivers are recommended for your specific GPU:

bash
 
$ ubuntu-drivers devices
                                            

To automatically install the best recommended driver for your hardware, run:

bash
 
$ sudo ubuntu-drivers autoinstall
                                            

Note: Once the installation is complete, you must reboot your server for the changes to take effect.

bash
 
$ sudo reboot
                                            

Step 3 — Verify the GPU Installation

After logging back into your server, check that the NVIDIA driver is working correctly by using the NVIDIA System Management Interface tool (nvidia-smi).

bash
 
$ nvidia-smi
                                            

If the installation was successful, you will see a table displaying your GPU's name (e.g., NVIDIA L4 or A10), its memory usage, and the driver version. This confirms that your bare-metal server recognizes the GPU.
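If you need the driver and CUDA versions programmatically (for logging or a setup-validation script, for example), you can extract them from nvidia-smi's banner line. A minimal sketch, run here against a sample banner with hypothetical version numbers, since the real output depends on your driver:

```python
import re

def parse_smi_header(header: str):
    """Extract driver and CUDA versions from nvidia-smi's banner line."""
    driver = re.search(r"Driver Version:\s*([\d.]+)", header)
    cuda = re.search(r"CUDA Version:\s*([\d.]+)", header)
    return (driver.group(1) if driver else None,
            cuda.group(1) if cuda else None)

# On a live server, capture the real banner with:
#   subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
# Sample banner for illustration (version numbers are hypothetical):
sample = "| NVIDIA-SMI 535.183.01   Driver Version: 535.183.01   CUDA Version: 12.2 |"
print(parse_smi_header(sample))  # → ('535.183.01', '12.2')
```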

Step 4 — Install Anaconda (Miniconda)

When working with AI and Machine Learning, you will often need different versions of Python for different projects. Conda is the most widely used tool for managing ML environments, so we will install Miniconda (a lightweight version of Anaconda).

Download and run the installer:

bash
 
$ mkdir -p ~/miniconda3
$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
$ bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
$ rm ~/miniconda3/miniconda.sh
                                            

Initialize Conda so it starts automatically:

bash
 
$ ~/miniconda3/bin/conda init bash
                                            

Close and reopen your terminal (or run source ~/.bashrc) to apply the changes.

Step 5 — Create an Isolated AI Environment

Let's create a dedicated environment for our ML projects using Python 3.10.

bash
 
$ conda create -n ai_env python=3.10 -y
$ conda activate ai_env
                                            

You will now see (ai_env) at the beginning of your terminal prompt.
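To double-check that your shell is now using the environment's interpreter rather than the system Python, you can inspect sys.executable from inside the environment (the exact prefix depends on where you installed Miniconda; with the paths used above it will contain ai_env):

```python
import sys

# Path of the running interpreter; with the environment active it
# looks like ~/miniconda3/envs/ai_env/bin/python
print(sys.executable)
print("ai_env active:", "ai_env" in sys.executable)
```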

Step 6 — Install PyTorch and TensorFlow with GPU Support

Now for the exciting part: installing the actual Machine Learning frameworks. Neither command requires a system-wide CUDA toolkit. The tensorflow[and-cuda] pip extra pulls in the NVIDIA CUDA libraries that TensorFlow needs, and the Conda command for PyTorch installs a matching pytorch-cuda package alongside it.

To install TensorFlow (with GPU support):

bash
 
$ pip install tensorflow[and-cuda]
                                            

To install PyTorch (with GPU support):

bash
 
$ conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia -y
                                            

Step 7 — Verify AI Frameworks are Using the GPU

Finally, let's write a quick Python script to ensure that both PyTorch and TensorFlow can successfully "see" and utilize your NVIDIA GPU.

Type python in your terminal to open the Python interactive shell, and paste the following code:

python
 
import tensorflow as tf
import torch

# Check TensorFlow
print("TensorFlow GPU Available: ", len(tf.config.list_physical_devices('GPU')) > 0)

# Check PyTorch
print("PyTorch GPU Available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch GPU Name: ", torch.cuda.get_device_name(0))
     

If both checks report True, congratulations! Your server is fully configured and ready to train large machine learning models on the GPU.
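If you later turn this check into a script (for example, as a setup-validation step), it is safer to probe for the packages before importing them, so the script also works on machines where one framework is missing. A small sketch using only the standard library:

```python
from importlib import util

def installed(package: str) -> bool:
    """Return True if the named package can be imported in this environment."""
    return util.find_spec(package) is not None

# Report which frameworks are present without crashing if one is absent
for pkg in ("torch", "tensorflow"):
    print(f"{pkg}: {'installed' if installed(pkg) else 'missing'}")
```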

Conclusion: Ready to Scale Your AI Workloads?

Setting up the environment is just the first step. To train models at maximum efficiency, you need hardware that doesn't bottleneck your data.

At eServers, our GPU Dedicated Servers are housed in secure UK data centers, featuring blazing-fast NVMe storage and the latest NVIDIA enterprise cards (including A10, L4, and RTX series). Explore our bare-metal solutions today and get the uncompromising compute power your AI projects deserve.

