Procom NetForce 3500
Synopsis: Procom's NetForce 3500 is one of the best mid-range network-attached storage systems that we've seen. It's fast, it's flexible, it has excellent high-availability features…and it's a good value, compared to other enterprise-class devices in the terabyte range. If you're looking for the cheapest way to throw a few hundred gigabytes at a problem, it's probably overkill. But if you're looking for serious storage for a large department or even an entire company, this can fit the bill.
February 2002 | Procom Technology Inc.'s new high-end network-attached storage server, the NetForce 3500, is designed for medium-sized enterprises or large departmental data centers with storage needs in the low end of the terabyte range. Able to accommodate as much as 11TB of disk space via one or more high-speed Ethernet interfaces, the NetForce is an impressive machine: it has a competitive price, is relatively easy to set up and administer, and has features that help make it reliable to operate. The NetForce isn't as massive as giants like EMC's CLARiiON or IBM's Shark—you're not going to consolidate a huge business onto a Procom machine. But it's much more scalable than storage devices from NAS vendors like Maxtor and Quantum/Snap Appliances.
You might think of the NetForce 3500 as a direct competitor to the large NAS servers from Network Appliance, though with a twist of its own that makes it more scalable, but also a little harder to manage.
NAS in Front, SAN in Back
Looked at one way, the NetForce is a network-attached storage server: it plugs into a Fast Ethernet or Gigabit Ethernet network, and end-users and servers connect to it using Windows' Network Neighborhood or Unix's Network File System (NFS). On the back end, however, the NetForce uses a Fibre Channel-based storage area network (SAN) to provide far more flexibility, scalability, and high-reliability features than a typical NAS device. The good news is that Procom did an excellent job of hiding the SAN features from end-users, and most of the time from network administrators. The bad news is that when you add or remove storage, or configure those high-availability features, you must deal with the SAN directly.
The base-level NetForce 3500 solution consists of three hardware components. First is the main NAS server head—which Procom calls the "filer head"—a customized PC server with dual removable power supplies, a two-line LCD display, PCI slots for Ethernet interfaces, and a Fibre Channel connector coming out the back. Second is a drive enclosure (known in the storage industry as a JBOD, for "just a bunch of disks"), which has its own power supplies, a Fibre Channel connection, and room for 12 hot-swappable hard disks.
So far, this seems just like any large NAS appliance, such as those offered by market leader Network Appliance. But the difference is in the third piece: a Fibre Channel hub supplied by Vixel Corporation. The hub, which contains eight Fibre Channel interface slots, is used to connect the filer to the drive enclosure. By comparison, with Network Appliance the JBOD is connected directly to the filer head. The use of the Fibre Channel hub means that behind the scenes, the NetForce is a storage area network.
What's the benefit of this approach? For one thing, the Procom system can be expanded by connecting additional JBODs to the hub; each time you do so, that's 12 more hard drives, and at a current maximum of 180GB per hard drive, that's 2.16TB of storage you can add each time. If you run out of ports on the Fibre Channel hub, you can swap it out for a bigger one with 16 or 24 ports. Another benefit: If you're worried that your filer head might crash, and you want to add redundancy to the NetForce system, you can add a second head and connect it to the Fibre Channel hub too. That way, both filer heads can share the same JBODs and hard drives.
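The expansion arithmetic is easy to check. Here's a quick sketch; the 12-bay and 180GB figures come from this review, not from a Procom spec sheet, and the totals are raw capacity before any RAID overhead:

```python
# Back-of-the-envelope capacity math for adding JBODs to the NetForce,
# using the figures cited in this review: 12 drive bays per JBOD
# enclosure and a current maximum of 180GB per drive.
DRIVES_PER_JBOD = 12
MAX_DRIVE_GB = 180

def raw_capacity_tb(jbods, drive_gb=MAX_DRIVE_GB):
    """Raw (pre-RAID) capacity in terabytes for a given number of JBODs."""
    return jbods * DRIVES_PER_JBOD * drive_gb / 1000

print(raw_capacity_tb(1))  # one fully loaded JBOD: 2.16TB
print(raw_capacity_tb(5))  # five JBODs on the eight-port hub: 10.8TB
```

Five fully loaded JBODs on the eight-port hub gets you close to the 11TB ceiling, with ports left over for the filer head itself.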
What about Network Appliance? In its systems, multiple Fibre Channel interfaces sit directly on the filer head, and the JBODs connect to them directly. If there are six interfaces, you can connect six JBODs. If you want seven, buy a new filer head. And while Network Appliance offers the capability to cross-connect multiple NAS appliances for redundancy, the process is much more complicated and expensive than simply connecting two filer heads to the same hub.
Another benefit of the Fibre Channel hub, by the way, is that a tape library can be attached to it and used to back up the disk drives directly—without going through the filer head. Although Procom didn't provide a backup system with the review unit we tested (which had a single JBOD with six 73GB hard drives), the company says that capability is built into the system.
Up and Running
When shipped to us from Procom, the NetForce 3500 was already partially configured, which the company says is its normal practice. The company also made a technician available to do the installation. We declined—you learn a lot more from setting it up yourself. That actually proved to be a pretty simple task, involving hooking up the Ethernet and Fibre Channel cables, plugging in the power, and throwing the switch. The hardest part turned out to be unpacking the filer head and JBOD—they're large and heavy, and moving them safely requires two people.
We did have one small problem right off the bat: the filer head refused to recognize our lab's DHCP (Dynamic Host Configuration Protocol) server, and thus couldn't obtain an IP address automatically. Once we figured out that this was the problem, we manually configured the IP address using the filer's two-line LCD readout and a series of push-buttons. The server then booted and appeared on the network.
Next, we set it up in the lab. Procom's technicians had divided the six hard drives into two three-disk partitions; each set of three drives was configured as a RAID 5 (striping with distributed parity) disk array. We turned each partition into a single volume, each with at least 100GB of usable space, calling them camden1 and camden2. At our request, Procom also configured the server with two Fast Ethernet ports and one Gigabit Ethernet port; we began working by connecting the NetForce to our test network via one of the Fast Ethernet ports.
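As an aside, the usable-space figure follows directly from how RAID 5 works: the array devotes one drive's worth of space to parity, spread across all the drives. A minimal sketch using the 73GB drives in our review unit (real-world formatted capacity will run a bit lower):

```python
# RAID 5 usable capacity: one drive's worth of space in each array
# holds parity, so an n-drive array yields (n - 1) drives of data.
def raid5_usable_gb(num_drives, drive_gb):
    """Usable data capacity, in GB, of a RAID 5 array."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least three drives")
    return (num_drives - 1) * drive_gb

print(raid5_usable_gb(3, 73))  # each three-disk volume: 146GB
```

That 146GB per volume is where the "at least 100GB usable" figure comes from.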
The server is administered via a Java-based utility, which is loaded onto a Windows workstation by browsing to the NetForce's IP address, downloading the utility, and executing it. Via this interface, we could view the two disk volumes and set up a large share filling the capacity of each volume; lacking imagination, we called them \\camden1\data1 and \\camden2\data2. That process took only a few moments, as did integrating those shares with a Windows NT domain, which allowed our Windows NT domain controller to handle access security. On networks without an NT domain, the NetForce can create and enforce its own ACLs (access control lists).
With the shares configured for both Windows and NFS access, we were able to mount them from the Windows and Linux systems in our lab. After populating the \data1 share with about 20GB of test data, we set the four clients to work copying data back and forth between the \data1 and \data2 volumes, generating enough traffic to flood the Fast Ethernet port. At that point, the NetForce reported that its CPU was at about 11 percent utilization, showing that the processor could handle the load with ease.
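For a sense of scale, here's a rough sketch of how long a single pass over that 20GB test set takes at Fast Ethernet and Gigabit line rates. This uses decimal units and ignores Ethernet, TCP, and file-protocol overhead, so real transfers run somewhat slower:

```python
# Rough time to move a data set over a saturated network link,
# ignoring protocol overhead (decimal units: 1GB = 1000MB).
def transfer_minutes(gigabytes, link_mbps):
    """Minutes to move `gigabytes` of data at `link_mbps` line rate."""
    megabits = gigabytes * 1000 * 8  # GB -> megabits
    return megabits / link_mbps / 60

print(round(transfer_minutes(20, 100)))   # Fast Ethernet: ~27 minutes
print(round(transfer_minutes(20, 1000)))  # Gigabit Ethernet: ~3 minutes
```

With clients continuously shuttling the set between the two volumes, that adds up to a sustained, non-trivial load on the server.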
Switching the NetForce to its Gigabit Ethernet port allowed us to generate about 340Mbps of bi-directional traffic, more than a typical organization's normal usage would produce. The server reported that its CPU utilization peaked at 59 percent. We were impressed with how well the server handled the increased traffic; end-users should be equally pleased using the device as a file server.
Tricky Fibre Channel
The only area where the NetForce becomes difficult to work with is when you change its physical configuration: adding new drives to a JBOD, adding a new JBOD, attaching a tape system to the NetForce back end, or hooking up a second filer head for additional redundancy. Those tasks require configuring the SAN hub through its own Web-based utility, and then making changes to the filer head's setup. That, in turn, requires an administrator who really understands the vagaries of SCSI and Fibre Channel storage, such as LUN (logical unit) mapping, the differences between partitions and volumes, Fibre Channel device IDs, and so on. If the system administrator understands high-end storage technologies, great. If not, it's possible to totally screw up the system, even trashing all your data.
Should this be a deal breaker? No, not if the administrators working with the machine have a reasonable level of technical savvy. The added complexity is the price you pay for the SAN back end's extra flexibility. But it's something buyers should be aware of.
Overall, we would judge the NetForce to be one of the best mid-range network-attached storage systems that we've seen. It's fast, it's flexible, it has excellent high-availability features… and it's a good value, compared to other enterprise-class devices in the terabyte range. If you're looking for the cheapest way to throw a few hundred gigabytes at a problem, it's probably overkill. But if you're looking for serious storage for a large department or even an entire company, this can fit the bill.
OTHER COMPANIES MENTIONED IN THIS ARTICLE
Maxtor Corporation www.maxtor.com
Network Appliance www.netapp.com
Quantum Corporation www.snapserver.com
Vixel Corporation www.vixel.com