Kumba (OP) · Member · Joined: Jun 2007 · Posts: 2,106
I have a rack that I'm rapidly filling up, and I am planning to migrate to a new hardware base. This presents me with some technical challenges that I need to solve. I am not that up to date on the rack/datacom products offered by vendors, so I figured I would explain my situation here in the hope of getting some insight.

So, without further ado, here is the problem in a nutshell.

The Backstory:
I have an enclosed rack that is 46-ish U (it's a weird rack, but I only pay for 44U). The front-to-back depth of the rack is up to 30". I have servers that are 1U 19" rackmount but only 9.8" deep. The other pertinent pieces of the story are four 48-port switches that are 18" deep and four 20-outlet 0U PDUs (power strips) with 3-foot IEC computer cords.

The Plan:
I am going to relocate the rails and set the rail depth to 26" (2" from door to rail in front and back). I am then going to mount my 1U 19", 9.8"-deep servers in the front AND back of this rack. The four 48-port switches will be mounted at the bottom. Since the switches are too deep to have anything mounted behind them, they will alternate: one facing the front, the next U above it facing the back, and so on for all four switches (stacked on top of each other). A 1U horizontal wire-management unit will be bolted behind each switch to cover the hole and support the switch above or below it. The 0U PDUs will be bolted in the 6" or so of air space between the front and rear servers, and each server will plug in with a 3-foot power cord.
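A quick sanity check of the depths involved (my own arithmetic from the figures above, not part of the original plan): with the rails set to 26" and 9.8"-deep servers on both the front and back rails, the center gap works out to roughly the 6" of air space mentioned for the PDUs.

```python
# Depth sanity check for the front-and-back mounting plan.
# Figures taken from the post above; this is illustrative arithmetic only.

rack_depth_in = 30.0    # usable front-to-back depth of the enclosed rack
rail_depth_in = 26.0    # planned rail-to-rail depth (2" door clearance each side)
server_depth_in = 9.8   # depth of each 1U server

# Servers mounted on both the front and back rails
center_gap_in = rail_depth_in - 2 * server_depth_in
print(f"Air gap between front and rear servers: {center_gap_in:.1f} in")  # ~6.4 in

# That gap is where the 0U PDUs get bolted, per the plan.
door_clearance_in = (rack_depth_in - rail_depth_in) / 2
print(f"Clearance from rails to each door: {door_clearance_in:.1f} in")   # 2.0 in
```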


The Problem:
All of these small 1U servers have front-mounted ports, with the exception of power in the back. I am trying to find some sort of vertical cable management for all the network cables this rack will have. Each server will have two Cat5 patch cables, each running to one of the switches. Maybe some kind of an offset 1U cable ring that I can mount every 4U or so over the brackets of the servers? Keep in mind that this is in a colo center, so wire management that clips onto the outside of the rack won't work. The doors also need to be able to be shut and locked.

The rack is intentionally populated from bottom to top to make sure that all 84 servers have an air column that is a straight shot up (convection principle). There is also going to be a vented tile put under the cabinet, as well as four 120mm high-CFM muffin fans mounted in the top plate of the cabinet. For those who are curious, each one of those servers represents 150 phone lines of capacity. The rack will have 80 amps of service (four 20-amp circuits). The need to keep things neat and serviceable is paramount.
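Not part of the original post, but the numbers above imply the following cabling and capacity totals (a rough sketch, assuming all 84 servers are dual-homed to the four switches as described):

```python
# Cabling and capacity totals implied by the figures in the thread.
# Purely illustrative; all inputs come from the posts, not a spec sheet.

servers = 84                 # 1U servers, mounted front and back
cables_per_server = 2        # two Cat5 patch cables per server
switches = 4                 # 48-port switches at the bottom of the rack
ports_per_switch = 48
lines_per_server = 150       # stated phone-line capacity per server

patch_cables = servers * cables_per_server
switch_ports = switches * ports_per_switch
print(f"Patch cables to manage: {patch_cables}")                 # 168
print(f"Switch ports available: {switch_ports}")                 # 192 (24 spare)
print(f"Total line capacity:    {servers * lines_per_server:,}")  # 12,600 lines
```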

Since a picture is worth a thousand words, I drew up something quick and dirty. Here it is: [Linked Image from azrael.crashsys.com]


Member · Joined: Feb 2006 · Posts: 826
I had a very similar challenge with some of my cabinets. On some, I was able to move the front rails back about four inches and mount a vertical cable retainer. On the others, where I had less depth to work with, I used a fairly shallow finger duct. I can take some pictures to give you some ideas on Monday if you like.

Kumba (OP) · Member · Joined: Jun 2007 · Posts: 2,106
That would be great. I'm all for any idea that prevents me from just having cables hanging down the face of this thing.

The one thing that looked plausible was vertical lacing strips from Middle Atlantic Products with their Velcro straps. That seems like it would be flexible enough to move when needed. Anyone have any experience with them and how well they do or don't work? https://www.middleatlantic.com/rackac/cablem/cablem.htm

Member · Joined: Feb 2005 · Posts: 12,342 · Likes: 3
Don't know about you, but I have always kinda assumed that you put your equipment in the front of a rack and had access to it from the back. Did the laws of IT change that?

With equipment located on both sides, how do you propose to get at the wiring in the middle? I would also give serious thought to the heat load of that many servers packed into that amount of space. I really don't think any amount of fans is going to cut it.

Why aren't you using at least two racks?

-Hal


CALIFORNIA PROPOSITION 65 WARNING: Some comments made by me are known to the State of California to cause irreversible brain damage and serious mental disorders leading to confinement.
Member · Joined: Feb 2006 · Posts: 409
Kumba, you didn't state the width of your rack. I'll assume that it is the typical 24" wide.
I would also be concerned with the heat load of those servers. It would seem to me that the switches overlapping at the bottom would restrict any airflow from entering the cabinet efficiently.
Here is a picture of what we have done in the past, but it is a 32" cabinet. This is the solution we came up with that did not take up any U's.

Todd

[Linked Image from i179.photobucket.com]


WWW.EmeryCommunications.com
Authorized SMB Avaya Business Partner
Member · Joined: Jan 2009 · Posts: 8
I know I'm new here, but Hal and Todd are quite correct. Installing hardware on both sides of the rack is asking for nothing except disaster.

First, heat between the devices will not dissipate effectively (think toaster oven), creating a problem with airflow. (Air circulation is your best friend with enclosed racks.)

Secondly, I certainly would not like to be the person responsible for having to swap hardware on this setup. In order to get to the back of a switch or server, you're going to have to reach across one device just to get to the device you're after. Not to mention that if it's a level below, you're now looking at a whole new problem (how to reach it).

Ever accidentally drop a plug end?
As the rack fills up with hardware placed in this manner, bringing a dropped power adapter back to the top will be "fun".

Your best bet (as previously indicated by others) is to place your hardware on one side. This will not only make life easier for future maintenance, it will also provide better airflow and cable-routing options.

Kerb

sph · Member · Joined: Oct 2007 · Posts: 289
Kumba, please tell me where you found application servers that are only 10" deep?!?! I needed those last year!!!

Secondly, I have to agree with Hal et al. regarding the positioning and heat. The normal practice in datacenters is to have all equipment facing one way. Apart from the wiring convenience and the heat generated in the rack, the proper way is to have "hot" and "cold" aisles between racks, i.e. the heat-venting sides of opposing racks face the same aisle. That way the front of your rack never faces the back of another.
Also, if I may say, the rack layout is non-standard: switches are usually placed above the servers, and panels (if any) above the switches. You start at the bottom with the heavier equipment and, if the weight difference is negligible, with the equipment that has the largest power requirements. That's in keeping with the normal design of having comm wiring on top (usually on a ladder rack) and power wiring on the bottom.
Unless of course you have in-floor wiring, but that is generally frowned upon for equipment rooms, though it's OK for user space.

Kumba (OP) · Member · Joined: Jun 2007 · Posts: 2,106
This is all in a raised-floor data center. The Lieberts use the floor as the plenum, and the return is pulled from the ceiling. Power is fed from under the floor. I've already made plans to have a vented tile installed under the rack, and they have agreed to open its damper completely for me. What I only hinted at is that these are funny racks: they have 46-ish U of space, but at the very bottom there is about 2.5" where the rails don't extend down. Open the front or back of the rack and it's dead space at the very bottom. This gives the cool air an area to come in and go up the sides, back, and front of the cabinet. The external width of the enclosed rack is 24". I don't remember the external depth of the rack, but it seems like there was about 2" from the rails to the front and rear doors when they were at their full 30" adjustment.

The rack is on the end of an aisle, and the side panel is removable with the turn of two bolts. The only cables that will be in the center are the power cables. Everything uses standard IEC-style cords. I hate power bricks and wall warts.

The switches are HP ProCurve 2650s, and they consume more power and are heavier than each of these servers. In my tests the measured power draw of a fully loaded ProCurve was around 1.2 amps (135-ish watts). The measured power draw of a fully loaded server was 1.0 amp (115-ish watts). The ProCurves are also heavier (about 10 pounds) than the servers (about 8 pounds).

I do have a second rack next to it but that is busy holding traditional 1U servers for the databases, archives, web servers, SIP/Media gateways, etc.

I definitely understand and appreciate all the concern about heat build-up within the rack. This whole model is currently in a proof-of-concept stage, and if I determine it to be stable, it will be how I approach high-density applications in the future. I actually have a very good relationship with the colo staff, and they have a high degree of interest in this succeeding for their own hosted business model. If the current plan proves to be unstable, we will switch to a forced-cooling option. This involves mounting a plate in the bottom of the cabinet with a 6" duct that attaches to a blower placed under the raised floor. The front and back doors will have their screens replaced with plexiglass, and the plate on top of the rack with the fans will be removed and replaced with wire mesh, basically converting the rack from horizontal (front-to-back) to vertical (bottom-to-top) airflow. The switches at the bottom will already act like an air plenum and force the cold air up the sides and the front and back doors of the rack. The other thing to remember is that I'm trying to provision for 100% load; in reality things will typically be running at 60-75% of load.

If I use the formula BTU/hr = (watts × 0.95) × 3.414, I estimate that I'll produce about 33,080 BTU/hr, or roughly 3 tons of HVAC rounded up. I need to double-check with the facilities people, but I believe the floor I'm on has a much higher thermal allowance per rack. I know electrically they are provisioned to deliver up to 200 amps per rack. I would assume that if they can deliver that amount of power, they should be able to deliver enough HVAC to cool it as well.
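Reproducing that estimate from the draw figures quoted earlier in the thread (my arithmetic, assuming 84 servers at ~115 W and four switches at ~135 W):

```python
# Reproduce the heat-load estimate using the measured draws quoted above.
# The 0.95 factor and the 3.414 BTU/hr-per-watt conversion are the ones
# given in the post; 12,000 BTU/hr per ton is the standard conversion.

servers, watts_per_server = 84, 115
switches, watts_per_switch = 4, 135

total_watts = servers * watts_per_server + switches * watts_per_switch  # ~10,200 W
btu_per_hr = (total_watts * 0.95) * 3.414                               # ~33,082 BTU/hr
tons_of_cooling = btu_per_hr / 12_000

print(f"Total load:  {total_watts} W")
print(f"Heat load:   {btu_per_hr:,.0f} BTU/hr")
print(f"HVAC needed: {tons_of_cooling:.2f} tons (rounds up to 3)")
```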

The idea here is to create a RAIC, or Redundant Array of Inexpensive Computers. This method (if it can be made to work) is cost-effective, high-density, and highly efficient. I know this particular approach is being utilized by companies such as Google to power their search engines.

sph: These are custom-built machines. Nothing from an OEM could meet my requirements, which were two nodes per 1U, 1.5 amps or less per node, at least one dedicated USB port per node (a software requirement), and under $1,000 per node. Nothing (blades, dual-node 1U chassis, etc.) could meet all four. Given that, I spent a lot of money and time testing and eventually arrived at my current recipe for a server: an Intel quad-core 2.4GHz machine with 4 GB of RAM and a RAID-1 pair of 160GB drives.

sph · Member · Joined: Oct 2007 · Posts: 289
Well, K, it seems you have done your homework. You mentioned Liebert; I assume they were the facility suppliers, and they are definitely a quality datacenter vendor. I wish you all the best.
Thanx for the info on the servers; that is a very good price point. The one thing with custom-built equipment is the time involved in testing the parts and the whole, and in the appropriate burn-in. Most of the premium from "name" manufacturers supposedly reflects just that.
I have done some work with dense high-availability clusters on Windows Datacenter, Solaris, and (a long time ago...) Tandem computers, but those utilized pretty expensive nodes to begin with. RAICs I don't have experience with. Please post your results when you have the time.

Kumba (OP) · Member · Joined: Jun 2007 · Posts: 2,106
Liebert was just the manufacturer of the environmental controls/HVAC. They're pretty much the standard for precision HVAC in data centers, it seems.

And you are right about the time and money spent figuring all this out. I was at their data center looking at their infrastructure, taking down model #'s, measuring, etc., for almost a whole day. Then it took another three weeks of buying hardware, testing it, seeing what made it overheat, measuring power consumption under different loads, and so on.

In retrospect, the company could have bought me a new 4-door sedan with the money spent on this project. But the thing is that just having a second rack in a data center is a $1,000/mo proposition (on average), and a single 20-amp circuit is a $400/mo deal. It all adds up real quick, especially when you are paying premium beaucoup rates for the facility.
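Just to put those recurring numbers side by side (my back-of-the-envelope figures; the assumption that a second rack would need its own four 20-amp circuits is mine, not Kumba's):

```python
# Rough monthly cost of adding a second rack, using the per-item figures
# quoted above. Circuit count is an assumption mirroring the existing rack.

rack_per_month = 1_000     # average cost of one additional rack
circuit_per_month = 400    # cost of one 20-amp circuit
circuits_needed = 4        # assumed, matching the existing rack's feed

monthly = rack_per_month + circuits_needed * circuit_per_month
print(f"Second rack, fully powered: ${monthly:,}/mo (${monthly * 12:,}/yr)")  # $2,600/mo
```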
