Category Archive: Storage

So far my eldest has all of her games on the gamecache drive.  Unfortunately Elder Scrolls Online refused to run from the gamecache drive.  It would hang up the network card, causing all network connectivity to go away, and only a restart of the machine would fix it.  However, since all of her other games are on the gamecache drive, the monster game now fits on her local storage without running out of local space.  I’ll troubleshoot the problem later (I think I need to put in an Intel NIC) and will try a few things once I can order the new NIC.

It is working out better than I anticipated.  Right now the eldest has been installing all of her games on the G drive.  I found out one of her games is nearly 200 gigs in size.  Holy crap, Batman.  She can easily chew through more than 400 gigs of storage just with the games she plays.  Here are the ones that take up the most space:

  1.  Skyrim
  2. Diablo III
  3. Starcraft II
  4. Lord of the Rings Online
  5. Elder Scrolls Online (this one is the nearly 225 gigabyte monster)

She informed me she has a ton of smaller games that she will be installing now that she has the space.  She also asked: what happens if I run out of my 1 terabyte allocation?  I told her it is a couple of mouse clicks to add more space.  Let’s see if she chews through the whole terabyte…if she does, I have 3.2 terabytes waiting..:)
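Those “couple of mouse clicks” in the FreeNAS UI boil down to a single property change on the zvol.  A minimal sketch from the shell, assuming the Data/Steamcache zvol shown in the stats below:

```shell
# Sketch: raise the cap on a thin-provisioned zvol from 1T to 2T.
# Because the volume is sparse, this only raises the ceiling; no
# pool space is consumed until she actually writes that much data.
zfs set volsize=2T Data/Steamcache

# Confirm the new cap.
zfs get volsize Data/Steamcache
```

The client side (Windows, over iSCSI) then sees a bigger disk and the partition can be extended in Disk Management.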

*Game Cache Update* The built-in compression.  For VMs you can sometimes get 5-10x compression because VMs are mostly empty space.  ZFS does compression transparently.  Right now, as part of the eldest’s game cache, her system thinks it has written 74 gigabytes of data.  ZFS compression has reduced that down to 52.5G in the background.  This is roughly a 1.41x reduction in size just from basic compression.  Normally with my file types (movies, music…mainly stuff that is already compressed) I do not see any real compression.  With her Steam apps the compression is much higher.  It will be interesting to see if it goes up or down as she loads up the rest of her games.
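The 1.41x figure ZFS reports is just the logical bytes the client wrote divided by the physical bytes stored after lz4.  A quick sketch using the numbers from the property dump below:

```shell
# compressratio = logicalused / used, from the zvol properties:
logical=74.0   # GB the client thinks it wrote (logicalused)
physical=52.5  # GB actually stored on disk after lz4 (used)
awk -v l="$logical" -v p="$physical" 'BEGIN { printf "%.2fx\n", l / p }'
# prints 1.41x
```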

NAME             PROPERTY                VALUE                  SOURCE
Data/Steamcache  type                    volume                 -
Data/Steamcache  creation                Tue Nov 19 17:11 2019  -
Data/Steamcache  used                    52.5G                  -
Data/Steamcache  available               4.17T                  -
Data/Steamcache  referenced              52.5G                  -
Data/Steamcache  compressratio           1.41x                  -
Data/Steamcache  reservation             none                   default
Data/Steamcache  volsize                 1.00T                  local
Data/Steamcache  volblocksize            128K                   -
Data/Steamcache  checksum                on                     default
Data/Steamcache  compression             lz4                    inherited from Data
Data/Steamcache  readonly                off                    default
Data/Steamcache  copies                  1                      inherited from Data
Data/Steamcache  refreservation          none                   default
Data/Steamcache  primarycache            all                    default
Data/Steamcache  secondarycache          all                    default
Data/Steamcache  usedbysnapshots         0                      -
Data/Steamcache  usedbydataset           52.5G                  -
Data/Steamcache  usedbychildren          0                      -
Data/Steamcache  usedbyrefreservation    0                      -
Data/Steamcache  logbias                 latency                default
Data/Steamcache  dedup                   off                    default
Data/Steamcache  mlslabel                                       -
Data/Steamcache  sync                    disabled               inherited from Data
Data/Steamcache  refcompressratio        1.41x                  -
Data/Steamcache  written                 52.5G                  -
Data/Steamcache  logicalused             74.0G                  -
Data/Steamcache  logicalreferenced       74.0G                  -
Data/Steamcache  volmode                 default                default
Data/Steamcache  snapshot_limit          none                   default
Data/Steamcache  snapshot_count          none                   default
Data/Steamcache  redundant_metadata      all                    default
Data/Steamcache  org.freebsd.ioc:active  yes                    inherited from Data

I made an earlier post about an experiment I am running.  So far so good.  The eldest is putting her games onto the new G drive her computer sees.  The magic of iSCSI makes it appear as a local hard drive even though it’s on a network server.  I am a HUGE fan of iSCSI and I use it as much as I can…especially when the storage is Linux or UNIX.

I did notice that the transfer was maxing out at 650 megabit/second…I know the machine used to do 2 gigabits/second when it was a backup target.  What has changed throughout the years?  I did a little bit of digging.  ZFS is all about data safety.  You have to be extremely determined to make it lose data, and sometimes that ultimate safety comes at the price of performance.  I started looking at the numbers: RAM (32 gigs) was not a problem, and CPU usage was less than 20% max.  The disks, however, were maxed out.  It turns out that ZFS has a ZIL (ZFS Intent Log) that is always present.  If there is no dedicated ZIL SSD, it lives on the main drives.  I thought that double (or in this case triple) writing to the drives was the cause…but it wasn’t.  I had to dig deeper, into the actual disk I/O calls.  It turns out that the default setting for synchronous writes defers to the application.  If the application says you must write synchronously, ZFS will not report the write transaction as completed until it makes both of its copies and verifies them on the array.  Loosely translated into RAID terms, that is a write-through.  Since ZFS is a COW filesystem I am not concerned about data getting corrupted…it won’t be (again, unless you have built it wrong, configured it wrong…something like that)…so I found the setting and disabled the forcing of synchronous writes.  I effectively turned my FreeNAS into a giant write-back caching drive.

Now the data gets dumped into the FreeNAS server’s RAM, the server says “I have it,” and the client moves on to the next task…either another write request or something else.  Once I did that, the disks went from 25% usage to nearly 50% usage and the data transfers maxed out the gigabit connection.  That’s how it is supposed to be.
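The setting itself is a one-line property change (in the FreeNAS UI it is the dataset’s sync option).  A sketch, assuming the Data/Steamcache zvol from the earlier post:

```shell
# Sketch: acknowledge writes once they land in RAM instead of waiting
# for them to be committed and verified on the pool (write-back behavior).
# Trade-off: a power failure can lose the last few seconds of writes,
# but the copy-on-write pool itself stays consistent.
zfs set sync=disabled Data/Steamcache

# Revert to honoring the application's request:
# zfs set sync=standard Data/Steamcache
```

For a Steam game cache the lost-writes window is an acceptable risk…worst case a game re-downloads.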

There are times for forcing synchronous writes…like databases, financials…anything where the data MUST be verified as written before things are released.  That’s when you can force synchronous writes and use a ZIL drive.  This is an SSD (typically) that holds the writes as a non-volatile cache until the hard disks catch up.  The ZIL grabs the data, verifies its integrity, tells the application the write has been accomplished (because it has), and then passes those writes to the array as a sequential stream (something hard drives are much better at than random writes).  What’s even nicer is that you can set the writing behavior per dataset or per zvol.  The entire filesystem doesn’t have to be one or the other, and it doesn’t hurt ZFS filesystem performance.  More as I figure it out, with the ultimate question being…how do games perform when operated like this…stay tuned.
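Per-dataset control means both worlds can coexist on one pool.  A sketch with hypothetical names (Data/Database and ada4 are illustrations, not anything on my box):

```shell
# Sketch: force synchronous semantics on a hypothetical database zvol
# while the game cache stays async on the same pool.
zfs set sync=always Data/Database      # Data/Database: hypothetical dataset

# Sketch: add a dedicated SSD as the pool's separate log (SLOG) device
# so those forced sync writes land on flash instead of the data disks.
zpool add Data log ada4                # ada4: hypothetical SSD device
```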

I came across an interesting use case for FreeNAS.  My eldest daughter likes games that are huge.  Like 100-250 gigabyte huge.  I simply cannot afford to keep adding SSD storage to her machine, and I will not do hard disks as main storage…under Windows 10 it’s too painfully slow.  What Lawrence had done was take a FreeNAS machine, slice off a portion of the raw storage, and present it to the workstation as a hard drive over his network.  His son now runs his large games from the FreeNAS zvol as if it were local.  What’s neat is the games’ initial load time is a bit slower (the NAS is hard drive based), but once it’s loaded…there’s no perceptible difference in gaming performance despite a constant stream of data from the server…usually less than 150 megabit/sec.  Since I have multiple terabytes of free space I am doing the same thing for my eldest.  I am also doing what is called thin provisioning, so it initially starts at zero usage and grows until she reaches her cap of 1 terabyte.  Let’s see how this works, as my quad core Xeon CPU is light years faster (with 4 times more RAM at 32 gigabytes) than his FreeNAS Mini’s dual core Atom with 8 gigs of RAM.  If this works…I have a new idea for future computer builds here at the house..<G>
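The “slice off a portion of the raw storage” step is a single command on the FreeNAS side; the iSCSI export itself is done through the Sharing section of the UI.  A minimal sketch, assuming a pool named Data and the zvol name I ended up using:

```shell
# Sketch: carve out a thin-provisioned (-s, sparse) 1 TB block volume.
# -V makes it a zvol (raw block device) instead of a filesystem dataset,
# which is what the iSCSI share hands to the Windows client as a "disk".
zfs create -s -V 1T Data/Steamcache
```

From there Windows sees a blank 1 TB drive, formats it NTFS, and Steam is none the wiser.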