After spending a significant amount of time on the Custom Router Build, it soon became apparent that there was stacks of headroom left in the M720Q Tiny for other things. So how could I run pfSense, the primary reason for the device, alongside other processes or applications that would be useful or just plain fun?
Enter Proxmox
Proxmox VE (Virtual Environment) is an open-source platform for virtualisation. With Proxmox we can make VMs and containers for pretty much anything we need. Fundamentally it is a Type 1 hypervisor that runs on bare metal. My initial plans were to run pfSense as a VM, then the DNS sinkhole Pi-hole as a container. While I'm at it, why not spin up an instance of Linux Mint, and a container purely to run tshark and learn how to automate packet captures?
Piece of Cake
Installing Proxmox itself is as easy as creating a bootable USB from the ISO and booting from that. Once installed, creating VMs and containers is even easier.
Uploading ISOs is straightforward.
On top of all that, the GUI and overall setup of Proxmox is extremely intuitive.
Datacenter will show you the overall server or cluster. Then you have individual nodes – here we have just the one. Inside each node are the VMs and containers.
We can see the network setup for our ‘Prox’ node. Notice the two virtual bridges that are assigned physical network adapters.
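For reference, two bridges like these correspond to a node network config along the following lines. This is a sketch, not my actual config: the interface names, addresses and bridge assignments here are examples, so adjust them to your own hardware.

```
auto lo
iface lo inet loopback

iface enp1s0 inet manual

iface enp2s0 inet manual

# LAN bridge - carries the Proxmox management IP and the pfSense LAN side
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

# WAN bridge - no IP on the host, attached only to the pfSense VM
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
```

Keeping the WAN bridge IP-less on the host means only pfSense ever touches the untrusted side of the network.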
Mint installed as a VM
Pi-hole as a container
Logs
Making a diagram always helps. This setup is the core of my SOHO network.
So, by using Proxmox on the M720Q Tiny I have pfSense, Pi-hole, a Linux Mint VM and a container for packet capture. So far the load on the device is not high at all, though I may want to get some more RAM soon.
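On the packet-capture container, the automation can start as simply as wrapping tshark in a small script. A minimal sketch in Python; the interface name, duration and output directory are placeholders, and it assumes tshark is installed in the container:

```python
import subprocess
from datetime import datetime

def build_capture_cmd(interface, seconds, out_dir="/captures"):
    """Build a tshark command for a timed capture to a timestamped pcap file."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    outfile = f"{out_dir}/capture-{stamp}.pcapng"
    return ["tshark",
            "-i", interface,              # capture interface
            "-a", f"duration:{seconds}",  # stop automatically after N seconds
            "-w", outfile]                # write raw packets to file

cmd = build_capture_cmd("eth0", 60)
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment on the container (needs tshark)
```

Dropped into cron, something like this gives you regular one-minute captures to pick apart later.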
Virtual Machines are all the rage these days and there are many reasons for this. It can be a very cost effective way to run multiple ‘machines’. If you have one powerful server then you can have many VMs running at the same time, all performing different tasks. On top of that it is easy to add and remove them as you require.
VMs are easy to save (snapshot) and restore when things go wrong and also VMs can have enhanced security features such as isolation and secure boot. You can control the resource allocation with clicks of a mouse and you can run pretty much any OS you like, depending on licenses of course.
Hyper-V, like other hypervisors, allows for automation and simple management of your VMs through a straightforward GUI.
Installing a VM on Hyper-V
So let’s say that you want to try out your first VM on Windows, let’s have a look at the steps for that. (This will require administrative permissions). You will need an installer file for the OS you want to install. Linux Distros such as Ubuntu or Kali are free and easy to download.
First of all we need to enable the Hyper-V feature, so go to Control Panel > Programs > Turn Windows features on or off and you will be presented with this dialogue:
Then you check the box next to Hyper-V and press OK.
Next we can click in the Windows search bar, type in Hyper-V, and we should see these options:
Choose Run as administrator.
This will open the Hyper-V manager window and we can get started on creating our first VM! Click on New:
This will start the New Virtual Machine Wizard. Click Next:
The first option to choose is the name and location for your VM. Here I am going to create an instance of Kali Linux, so I will call it Kali. Once you get into many VMs you will want a consistent naming convention, so it's best to start thinking about this early on. There is also the option to choose a location here, but to begin with I would just go with the default:
Generation. Generally speaking Generation 1 is BIOS and Generation 2 is UEFI. Unless you are using legacy OS versions go with Generation 2:
Assign memory. Depending on your host machine, the guest OS and the number of VMs you want to run at the same time, you can set the memory allocated to the VM we are installing right now. You want to give a VM at least 4 GB if you can; this is why VM workstations have 32 or 64 GB of RAM:
Configure Networking. Do you want your VM to talk with the network? Or just keep it isolated? Choose Default Switch for the former:
As with the RAM allocation, we now want to create a VHD. Choose the default location for now and decide the size you want to allocate:
Installation options. Choose how you want to install the OS. Usually best to point it at the ISO you have downloaded:
With all that done you can check the config you have asked for and click Finish:
Now we are back at the Hyper-V Manager window and our Kali VM is present. Click on it, and in the lower right-hand section click on Start and then Connect:
All being well the machine will start and you will then go through the process of installing the OS as normal:
Using Hyper-V To Your Advantage
Once you have installed a virtual machine on Hyper-V and it is running well, there are several useful features that Hyper-V provides to manage and maintain the virtual machine:
Snapshots: Hyper-V allows you to take snapshots of the virtual machine’s state, including the operating system and all data and applications. Snapshots can be used to revert the virtual machine to a previous state in the event of a problem or to test changes before making them permanent.
Live Migration: With Live Migration, you can move a running virtual machine from one physical host to another without any downtime. This can be useful for maintenance tasks, such as hardware upgrades or repairs, or to load balance virtual machines across multiple hosts.
High Availability: Hyper-V provides High Availability features, such as failover clustering, to ensure that virtual machines are always available, even in the event of a host failure. This helps to minimise downtime and ensures business continuity.
Resource allocation: Hyper-V allows you to allocate resources, such as CPU, memory, and storage, to virtual machines as needed. You can also use resource controls, such as resource metering and resource allocation policies, to ensure that virtual machines have the resources they need to run effectively.
Network virtualisation: Hyper-V provides network virtualisation features, such as virtual switches, that allow you to create virtual networks that are isolated from the host and from each other. This helps to secure virtual machines and to control network traffic.
Remote management: Hyper-V allows you to manage virtual machines remotely, using Hyper-V Manager or Windows PowerShell. This makes it easier to manage virtual machines from a centralised location and to automate management tasks.
Backup and recovery: Hyper-V provides backup and recovery features, such as checkpoint-based backups and replication, to help protect virtual machines and ensure that they can be recovered in the event of a disaster.
These are just some of the useful features that Hyper-V provides for virtual machines once they are installed and running. By leveraging these features, you can manage and maintain virtual machines more effectively and ensure that they are always available and running optimally.
For my first attempt at a virtual lab to simulate an enterprise environment, I was tempted to install the devices listed in a Reddit post that inspired me to start this project:
Virtual security appliances – firewalls etc
Virtual switches
Domain controller
DNS server
DHCP Server
SIEM
IDS
Endpoints
Active Directory
Groups
Clearly this would be trying to run before I could walk, so I asked ChatGPT for a simple network to virtualise in Azure:
Sure! Here is a basic network setup that you can try to implement in Azure:
Create a virtual network (VNet) in Azure. A VNet is a logically isolated section of Azure’s cloud network where you can create and manage your Azure resources.
Create a subnet within the VNet. A subnet is a range of IP addresses within a VNet that you can use to segment your network.
Create a network security group (NSG) and associate it with the subnet. An NSG is a firewall that controls inbound and outbound traffic to and from your resources.
Create an Azure virtual machine (VM) and associate it with the subnet. An Azure VM is a virtual machine that runs in the cloud and can be used to host applications and services.
Connect the VM to the Internet by creating a public IP address and associating it with the VM. A public IP address is an IP address that can be accessed from the Internet.
Configure the NSG to allow inbound and outbound traffic to and from the VM. You can do this by creating inbound and outbound rules that specify the source and destination of the traffic and the port number.
Test the network setup by connecting to the VM using Remote Desktop Protocol (RDP) or SSH. You can use RDP to connect to a Windows VM, and you can use SSH to connect to a Linux VM.
I hope this helps! Let me know if you have any questions.
So going from this:
Project Details
Subscription = Primary_subscription
Resource group = Basic_resourcegroup
Virtual Network Name = Basic_vNW
Region = UK South
Subnet
Subnet = Basic_subnet
Starting Address = 10.0.1.0
Subnet size = /29 (8 addresses)
No Security
IP Address Space = 10.0.0.0 – 10.0.0.255 – No Gateway
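As a sanity check on the subnet sizing, Python's ipaddress module confirms what a /29 actually gives you. Two things worth noting: Azure reserves 5 addresses in every subnet, and 10.0.1.0/29 falls outside the 10.0.0.0 – 10.0.0.255 space recorded above. Since Azure requires a subnet to sit inside the VNet's address space, one of those two values is presumably mis-recorded:

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.1.0/29")
print(subnet.num_addresses)        # 8 addresses in a /29, as stated

# Azure reserves 5 IPs per subnet (network address, gateway,
# two for Azure DNS, and broadcast), leaving only 3 usable here.
print(subnet.num_addresses - 5)

# A subnet must sit inside the VNet's address space;
# 10.0.1.0/29 is not inside 10.0.0.0/24.
vnet = ipaddress.ip_network("10.0.0.0/24")
print(subnet.subnet_of(vnet))      # False
```

A /29 minus Azure's reserved addresses is tight even for a lab, so a /27 or larger would give more room to grow.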
Network Security Group
NSG = Basic_nsg
Virtual Machine
VM = BasicVM
Size = Standard_B2ms ($0.1 per hour)
Image = Win 10 Pro 21H2
Admin= Basic_admin
Password= *******************
Public inbound ports= RDP 3389
Connect VM to Internet
Public IP = *********
Configure NSG
Inbound rule: Allow MyIPAddress – Port 3389 – TCP – Source 51.199.159.108 – Dest Any – Allow
Inbound rule: AllowAnySSHInbound – Port 22 – TCP – Source Any – Dest Any – Allow
Outbound rule: AllowCidrBlockHTTPSOutbound – Port 443 – TCP – Source 10.0.1.0/29 – Dest Any – Allow
Testing
Testing the network by connecting via RDP does allow me to connect to the VM but I cannot ping it or connect via SSH at this time.
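That result makes sense if you walk through how NSG rules are evaluated: rules are checked in priority order and anything not explicitly allowed falls through to the default deny. A simplified sketch of that first-match logic using the inbound rules above (real NSGs also carry default Allow rules for VNet and load-balancer traffic, omitted here):

```python
# Inbound rules from the NSG above, in priority order, plus the default deny.
RULES = [
    {"name": "AllowMyIPAddress",   "proto": "tcp", "port": 3389, "access": "allow"},
    {"name": "AllowAnySSHInbound", "proto": "tcp", "port": 22,   "access": "allow"},
    {"name": "DenyAllInbound",     "proto": "any", "port": None, "access": "deny"},
]

def evaluate(proto, port):
    """Return the first matching rule, mimicking NSG first-match semantics."""
    for rule in RULES:
        proto_ok = rule["proto"] in ("any", proto)
        port_ok = rule["port"] in (None, port)
        if proto_ok and port_ok:
            return rule["name"], rule["access"]

print(evaluate("tcp", 3389))   # RDP: allowed, matching the test result
print(evaluate("tcp", 22))     # SSH: allowed by the NSG...
print(evaluate("icmp", None))  # ping: falls through to the default deny
```

So the ping failure is expected: no inbound rule allows ICMP, and the Windows firewall inside the VM also drops echo requests by default. The SSH failure is a different story: the NSG allows port 22, but Windows 10 does not run an SSH server unless the optional OpenSSH Server feature is installed.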
Conclusion
It seems I managed to implement the virtual lab in Azure to a degree. There are still a lot of question marks about the virtual network and the network security group. The rules set in the NSG didn't seem to have the effect I expected on the VM, so there's work to be done there.
I want to gain experience of creating and managing an enterprise level IT network. So far I have studied and gained CompTIA A+, N+ and am in the process of getting my S+ but apart from managing the LAN at home I have no experience of commercial networks and IT infrastructure.
I am not interested in the cost and hassle of buying and setting up old/legacy networking equipment that has been discarded by enterprise. Surely a very inefficient way of learning in this day and age.
So the decision moving forward comes down to either a new, modern workstation capable of virtualising an enterprise network setup, or moving to a cloud environment.
Options
A virtualisation workstation for my home office and building VMs and virtual networks. Although this is appealing in that it’s always nice to buy new equipment and it’s something tangible and lasting to show for the money spent, is it really the best option?
Wouldn’t it be wiser to do all of this in the cloud? For a start it would create much less of an impact on my energy bills. On top of that it would be cheaper in the short term and most importantly it will give valuable experience in the cloud.
Cost
Virtualisation Workstation – with 150 W power draw @ 35p per kWh
Intel Core i9 – £350
Motherboard – £175
Ram (64GB) – £185
Case – £120
PSU (650W) – £120
SSD (M.2 2TB) – £220
Liquid cooler – £120
Windows 11 – £120
Total – £1410
Cloud – at roughly 1p per VM per hour, an enterprise-style setup of ~7 VMs comes to about 7p per hour. Maybe 50p per day, £2.50 a week, £10 per month, £120 per year. (This is a very generous provision of cloud compute; an instance on Linode can be $0.05 per hour.)
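Spelling out the arithmetic behind those cloud figures; the 7 VMs and 7 hours of daily usage are my own rough assumptions rather than measured numbers:

```python
vm_cost_per_hour = 0.01   # ~1p per VM per hour, in GBP
vms = 7                   # enterprise-style setup of ~7 VMs
hours_per_day = 7         # generous daily usage estimate

per_day = vm_cost_per_hour * vms * hours_per_day  # ~£0.49, call it 50p
per_week = 0.50 * 5                               # 5 lab days a week
per_month = per_week * 4
per_year = per_month * 12
print(per_day, per_week, per_month, per_year)
```

At 50p per lab day the annual figure lands at £120, which is the number used in the comparison below.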
Thin Client – £300. A new thin client would make sense: it would be far more energy efficient and it would run Windows 11. (This option can be taken up at any time.)
Power draw from a device like this is a fraction of a workstation's. This is very hard to assess accurately: an Intel i9 system can peak at around 250 W, but call it 150 W on average when it's operating 5 or more VMs, while a thin client could be using 20 to 30 W as it's only controlling and monitoring the cloud VMs. At 5 hours a day and 5 days a week, that compares as roughly 37.5 kWh to 187.5 kWh over the whole year.
At current UK prices that is about £13 compared with £66 a year.
So roughly speaking, the workstation's purchase price alone buys more than ten years of cloud VM usage.
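Recomputing those energy figures from the stated assumptions (150 W versus 30 W, 5 hours a day, 5 days a week, 50 weeks, 35p per kWh):

```python
hours_per_year = 5 * 5 * 50   # 5 h/day, 5 days/week, 50 weeks = 1,250 hours
price_per_kwh = 0.35          # current UK price, in GBP

def annual_cost(watts):
    """Return (kWh per year, cost per year in GBP) for a given average draw."""
    kwh = watts * hours_per_year / 1000
    return kwh, kwh * price_per_kwh

ws_kwh, ws_cost = annual_cost(150)  # workstation
tc_kwh, tc_cost = annual_cost(30)   # thin client
print(ws_kwh, ws_cost)  # 187.5 kWh, about £66
print(tc_kwh, tc_cost)  # 37.5 kWh, about £13
```

Against the workstation's £1,410 build cost and the cloud's ~£120 a year, the running costs only strengthen the cloud case.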
Pros and Cons

Physical
Pros: Will last for a long time; Nice thing to have; Modern PC for all work; Experience in building; Experience in using
Cons: High initial cost; High energy usage; Noisy?; Impact on environment; Stuck with config once bought

Virtual
Pros: Experience in Cloud; Much cheaper in the long run; Can be accessed from anywhere, on any device; On demand, elastic; Low impact on environment; Clearly the way of the future; Will ‘last’ forever
Cons: Nothing to show for it at the end of the day; No modern PC for everyday usage; Overspend possible; Supporting a huge corporation (Linode?); Requires a decent connection; Not as direct and ‘snappy’ as a physical system; Cost of a modern thin client
Conclusion
While both options are appealing the deciding factors in favour of the cloud are manifold:
Cheaper
Far more flexible
Experience in the future of IT
The workstation option can always be taken up if the cloud isn’t working out
Expanding on the last point: I will need a modern Windows 11 machine in my office at some point, and the knowledge and experience gained from the cloud will point me towards the right sort of machine for that.