
SAN vs NAS: The Ultimate Guide to Understanding the Two

November 8, 2019


SAN vs. NAS: the eternal question. How can the same three letters, in a different order, make such a huge difference in cost, performance, reliability, and general headachiness? Why are all computer systems so acronym-heavy? Can't everything just be plug and play? I mean, we're pretty deep into the 21st century at this point, so what gives?

 

These are the questions that keep us up at night here at Lumaforge. Fortunately, we’ve lost sleep, so you don’t have to. Read below to find out the differences, and learn which system is right for you and your team.

 

We make shared storage for video teams, so we’re going to compare the two through the lens of video and audio post, rather than, you know… everything else that you put on a computer.

 

What is a SAN Server?

 

At its most basic level, a Storage Area Network is an additional network purpose-built just for moving large chunks of data to large disk arrays. SANs are more complicated than that, though, generally consisting of three "layers" that make up the system.

 

The Storage Layer is exactly what it sounds like: all the disks, tapes, or SSDs, in whatever arrays, that connect to the system.

 

Those arrays are connected through the Fabric Layer: the networking devices, like switches, routers, and cabling, that connect the storage to the third layer and to the individual client machines.

 

Finally, to run the SAN, there has to be a server, or multiple servers (sometimes called metadata controllers), acting as the Host Layer. Basically, the host layer is the air traffic controller keeping data flowing through the "airspace" of the SAN. Without a host layer, there is nothing to coordinate the files flying in and out of the storage layer, and they will crash into each other, creating data traffic jams in the fabric layer.
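
To make that air-traffic-controller idea a little more concrete, here's a deliberately simplified sketch in Python. It's a toy model only, not how any real SAN or metadata controller is implemented: a controller hands out exclusive "leases" on files so two clients never write to the same file at once.

```python
# Toy model of a SAN host layer acting as a metadata controller.
# Purely illustrative; real SANs coordinate access at the block level
# with far more sophisticated locking and caching.

class MetadataController:
    def __init__(self):
        self.leases = {}  # file path -> client currently holding write access

    def request_write(self, client: str, path: str) -> bool:
        """Grant write access only if nobody else holds a lease on the file."""
        holder = self.leases.get(path)
        if holder is None or holder == client:
            self.leases[path] = client
            return True
        return False  # another client is already writing; wait your turn

    def release(self, client: str, path: str) -> None:
        if self.leases.get(path) == client:
            del self.leases[path]


controller = MetadataController()
print(controller.request_write("edit-bay-1", "/projects/promo/edit_v3"))  # True
print(controller.request_write("edit-bay-2", "/projects/promo/edit_v3"))  # False
controller.release("edit-bay-1", "/projects/promo/edit_v3")
print(controller.request_write("edit-bay-2", "/projects/promo/edit_v3"))  # True
```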

 

What is a NAS Server?

 

Network-Attached Storage is a server that has storage built into it. Basically, the host layer and the storage layer live in the same box. That means the server can connect to the client machines through a switch, like a SAN but much easier to manage, or even connect directly to each client computer using ethernet cables, which reduces the fabric layer down to just the cables running from your computers to the server.

 

How is a SAN different from NAS?

 

There are myriad differences between a SAN and a NAS, but unless you're super interested in learning about TCP/IP vs. FCoE, or SCSI vs. NFS/SMB/CIFS, or block storage vs. file systems (seriously, engineers, what's with the alphabet soup?), I'm just going to talk about the three major differences you're likely to see as a person with a computer plugged into a network with some disks on it.

 

1) A NAS will appear on your computer as either a server or a shared folder. A SAN will appear as attached storage (like plugging in a Thunderbolt drive or a thumb drive, but HUGE and hopefully fast).
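
If you want to see that difference for yourself, one quick way is to list what's mounted on your machine and look at the filesystem type. Here's a small sketch using the third-party psutil library (assuming it's installed): network shares from a NAS show up with types like smbfs or nfs, while SAN or locally attached volumes show up as block devices with a local filesystem type.

```python
# Requires the third-party psutil package: pip install psutil
import psutil

NETWORK_FS = {"smbfs", "nfs", "nfs4", "cifs", "afpfs", "webdav"}

for part in psutil.disk_partitions(all=True):
    kind = ("network share (NAS-style)" if part.fstype.lower() in NETWORK_FS
            else "local / attached volume (DAS- or SAN-style)")
    print(f"{part.mountpoint:<30} {part.fstype:<10} {kind}")
```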

 

2) Most NAS implementations take everything, data and control traffic alike, over one ethernet cable using standard IP protocols. The speed of the cable and the device connections determines the transfer speed between the server and the client machine.

 

A SAN will usually require two networks running to your computer: one for data, and one for metadata and system control. The data network is usually 4, 8, or 16 gigabit Fibre Channel over optical fiber, and the metadata network is usually standard 10/100/1000 ethernet.
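
For a rough sense of scale, here's a quick comparison in Python. The MB/s figures are the commonly quoted nominal numbers, roughly 100 MB/s of usable throughput per "gigabit" of Fibre Channel rating and a little under the raw line rate for ethernet once protocol overhead is taken into account; treat them as ballpark values, not benchmarks.

```python
# Rough, nominal throughput figures (MB/s, one direction) for common links.
# Ballpark numbers for comparison only, not benchmark results.
links_mb_per_s = {
    "1 Gb ethernet":       110,   # ~1000 Mb/s minus protocol overhead
    "10 Gb ethernet":      1100,
    "4 Gb Fibre Channel":  400,   # FC is commonly quoted at ~100 MB/s per "gigabit"
    "8 Gb Fibre Channel":  800,
    "16 Gb Fibre Channel": 1600,
}

for link, mbps in links_mb_per_s.items():
    print(f"{link:<22} ~{mbps:>5} MB/s")
```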

 

3) Generally, a NAS is the more affordable option and can be easier to set up and maintain. SANs tend to be much more expensive and require ongoing management by dedicated IT personnel, but they are often more scalable than a NAS and tend to be used when there is a very large number of connected client machines.

 

When would I use SAN vs. NAS?

 

SANs still have a place, especially in large organizations dealing with huge datasets. A large VFX house, for instance, is constantly moving gigantic video, 3D, and image files around to hundreds or even thousands of artists' machines. Companies dealing with that many users and such huge datasets still require the flexibility and speed that a SAN can deliver at that scale.

 

A NAS, while perhaps more limited in scope, is still often used in large organizations as a local server for a smaller team within the group, or for larger teams with less demanding workloads, like spreadsheets and smaller databases.

 

But those defined use cases are breaking down as 10Gb ethernet installations become more and more common, and installations of up to 100Gb ethernet become significantly cheaper than comparable Fibre Channel installations. The capabilities and scalability of NAS server systems have also improved dramatically over the last decade, with systems that can scale in line with much larger SANs at a fraction of the cost.

 

Which is better in video workflows?

 

In the past, a NAS was generally limited by the speed of the connections used to talk to the server. While 10 megabit or 100 megabit ethernet was fine for sending spreadsheets back and forth, looking things up in a database, or even working on software code, it was woefully inadequate for working on even standard definition or 720p video, much less the HD/2K cinema or 4K HDR files that we commonly edit and do graphics work on today.
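
To put rough numbers on that, here's a back-of-the-envelope calculation. The codec bitrates are ballpark published figures (ProRes 422 HQ is around 220 Mb/s at 1080p30 and roughly 730 Mb/s at UHD 30), and the 80% usable-bandwidth factor is a loose assumption; the point is just the order of magnitude of how many editors a given link can feed.

```python
# Back-of-the-envelope: how many concurrent streams of a codec fit on a link?
# Bitrates are rough ballpark figures; 0.8 is a loose "usable bandwidth" factor.

LINKS_MBPS = {"100 Mb ethernet": 100, "1 Gb ethernet": 1_000, "10 Gb ethernet": 10_000}
CODECS_MBPS = {"ProRes 422 HQ, 1080p30": 220, "ProRes 422 HQ, UHD 30": 730}
USABLE_FRACTION = 0.8  # assumption: protocol overhead, contention, etc.

for link_name, link_mbps in LINKS_MBPS.items():
    for codec_name, codec_mbps in CODECS_MBPS.items():
        streams = int(link_mbps * USABLE_FRACTION // codec_mbps)
        print(f"{link_name:<16} -> {codec_name:<24}: ~{streams} stream(s)")
```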

 

This meant that, for a long time, to get the speed necessary to run demanding video workflows, computers had to be connected to two different networks, one of them a very expensive Fibre Channel network, just to move that much data.

 

But that is no longer true, with faster ethernet and higher-quality servers (cough, the Jellyfish, cough) on the market.

 

There are still use cases for a SAN in video and post-production workflows, but generally only in extremely large organizations, with users numbering in the high hundreds or thousands. For teams of a couple hundred or fewer, most of the time a NAS system (ahem, the Jellyfish) will save you both money and time.

 

Besides SAN and NAS, what other storage solutions should I know about?

 

The third option, which you're most likely already familiar with, is DAS, also known as Direct-Attached Storage. Portable hard drives and desktop RAID arrays fall into this category, and they connect to a single client machine using USB or Thunderbolt (and before that, FireWire or SAS). The cool kids call this type of workflow "sneakernet" (get it... like ethernet, but you're having to walk drives around the office).

 

If you're working alone or in a small team of only two or three editors, having direct-attached storage, like a G-RAID, Promise Pegasus, or OWC ThunderBay (apologies to all the other fine vendors out there), on each machine is super common. Obviously, sharing files across your team can be time-consuming because of the one-to-one relationship between storage and computer, but there are ways to network the computers together on an internal network and share files that way. I ran a three-person post shop like that for several years, and it worked fine, though we were rarely working on exactly the same media at the same time.

 

How it looks on your end

 

A NAS usually appears to the client machine as a server, and a SAN just appears as a mountable drive.

 

The Jellyfish is configured to connect to your system and then present shares as drives that can be mounted, essentially eliminating that difference while retaining all the benefits of using a NAS: namely, offloading file management to the server instead of managing it client-side or needing a separate file server to run file-management software.
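
As a rough illustration of what "presenting a share as a mountable drive" looks like from the client side, here's a small Python sketch for macOS that mounts an SMB share into a folder in your home directory. The server address, share name, and username are made up for the example, and it assumes the share is exported over SMB; it is not a description of the Jellyfish's own connection tooling.

```python
# Illustrative only: mount an SMB share on macOS using the built-in
# mount_smbfs command. Server, share, and user names here are made up.
import os
import subprocess

SERVER = "jellyfish.local"      # hypothetical server address
SHARE = "Projects"              # hypothetical share name
USER = "editor"                 # hypothetical user
MOUNTPOINT = os.path.expanduser("~/ProjectsShare")

os.makedirs(MOUNTPOINT, exist_ok=True)

# mount_smbfs will prompt for the password in the terminal.
subprocess.run(
    ["mount_smbfs", f"//{USER}@{SERVER}/{SHARE}", MOUNTPOINT],
    check=True,
)
print(f"Mounted //{SERVER}/{SHARE} at {MOUNTPOINT}")
```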

 

SAN vs NAS Conclusion

 

SANs are great. Really. If you have someone who knows what they're doing, they can be incredibly fast and agile, and they can support gigantic organizations. But you pay for it, both in the cost of the pieces to put one together and in the billable hours of the IT person you have to keep on staff to maintain it. My first paid job in the film industry was setting up an Xsan and five edit bays at a broadcaster in Utah. And it was a titanic nightmare. The organization didn't really know what they were going to do with the system, so it wasn't on a deadline, and that's the only thing that kept me from getting fired. I had a background in computer science and significant networking experience, and that (relatively) small system still took me six weeks to figure out.

 

Fast-forward 13 years, and I work for a company that makes a NAS, the Jellyfish. I've gained a bit more IT experience in that time (mostly just swearing at Avid NEXIS servers).