Open Compute Compliance and Interop

With multiple vendors of OpenCompute hardware popping up, the logical next step was to make sure they correctly implement the specification. Two labels were created to indicate what sort of compatibility testing was performed: OCP Ready and OCP Certified. The OCP Ready label is free for vendors to use and indicates that their hardware complies with the spec and is able to work within an OCP environment. The OCP Certified label, however, is issued only by approved testing facilities, of which there are just two: one located at the University of Texas at San Antonio (UTSA), the other at the Industrial Technology Research Institute (ITRI) in Taiwan. The first vendors to become OCP Certified were WiWynn and Quanta QCT, while AMD and Intel have also received certification for their reference systems.

The OCP Certification Center (ITRI) has published the certification specifications used for Leopard, Winterfell and Knox, along with the test kits, allowing device owners to run the tests themselves. The test specifications itemize the essential features of the hardware and describe the pass or fail condition for each individual test. The openness of the test kits allows OCP vendors to first pass all tests in their own facilities, then ship the hardware to the validation center, pay for validation testing, and leave with OCP Certified gear.

How The Open Compute Project Will Impact Your Datacenter

Facebook's initiative caused quite a ripple through the large-volume datacenter equipment vendors. With Microsoft joining as a second major contributor by donating its Open Cloud Server suite, the Open Compute Project has gained substantial momentum. So far, a large share of the OCP contributions has come from parties like Facebook and Microsoft, which designed hardware to run their own operations more efficiently but are not vendors in the hardware space themselves.

Meanwhile, others have contributed information on various topics, such as Shingled Magnetic Recording disks and how racks are tested for rigidity and stability. A next step for the vendor community would be to bring more openness to bog-standard parts (hard drives, network controllers, ...).

The result of OCP is that innovative ideas in hardware and datacenter design are quickly being tested in the real world and ultimately standardized. This is in sharp contrast to the "non-OCP world", where brilliant ideas mostly remain PowerPoint slides and only materialize as proprietary solutions for those with deep pockets. The Open Rack is a perfect example: we have seen plenty of presentations on rack-level consolidated power and cooling, but such solutions were only available as expensive, closed, proprietary systems with plenty of limitations and vendor lock-in. Thanks to OCP, this kind of innovation not only becomes available to a much larger public, but we also get compatible, standardized products (i.e. servers that can plug into the power shelves of those racks) from multiple vendors.

 

P.S. Note from Johan: Wannes De Smet, who has done a lot of the research work around OpenCompute, is my colleague at the Sizing Servers Lab (Howest).

 

Comments

  • Black Obsidian - Tuesday, April 28, 2015 - link

    I've always hoped for more in-depth coverage of the OpenCompute initiative, and this article is absolutely fantastic. It's great to see a company like Facebook innovating and contributing to the standard just as much as (if not more than) the traditional hardware OEMs.
  • ats - Tuesday, April 28, 2015 - link

    You missed the best part of the MS OCS v2 in your description: support for up to 8 M.2 x4 PCIe 3.0 drives!
  • nmm - Tuesday, April 28, 2015 - link

    I have always wondered why they bother with a bunch of little PSUs within each system or rack to convert AC power to DC. Wouldn't it make more sense to just provide DC power to the entire room/facility, then use less expensive hardware with no inverter to convert it to the needed voltages near each device? This type of configuration would get along better with battery backups as well, allowing systems to run much longer on battery by avoiding the double conversion between the battery and server.
  • extide - Tuesday, April 28, 2015 - link

    The problem with doing datacenter-wide power distribution is that at only 12v, powering hundreds of servers would require thousands of amps, and it is essentially impossible to do that efficiently. Basically the way FB is doing it is the way to go -- you keep the 12v current to reasonable levels and only have to pass that high current a reasonable distance. Remember, 6KW at 12v is already 500A!! And that's just for HALF of a rack.
  • tspacie - Tuesday, April 28, 2015 - link

    Telcos have done this at -48VDC for a while. I wonder whether data center power consumption got too high to support this, or maybe the big data centers just don't have the same continuous uptime requirements?
    Anyway, love the article.
  • Notmyusualid - Wednesday, April 29, 2015 - link

    Indeed.

    In the submarine cable industry (your internet backbone), ALL our equipment is -48v DC. Even down to routers / switches (which are fitted with DC power modules, rather than your normal 100 - 250v AC units one expects to see).

    Only the management servers run from AC power (not my decision), and the converters that charge the DC plant.

    But 'extide' has a valid point - the lower voltage and higher currents require huge cabling. Once an electrical contractor dropped a piece of metal conduit from high up onto the copper 'bus bars' in the DC plant. Need I describe the fireworks that resulted?
  • toyotabedzrock - Wednesday, April 29, 2015 - link

    48 V allows 4 times the power at a given amperage.
    12 VDC doesn't like to travel far, and at the needed amperage it would require too much expensive copper.

    I think a pair of square wave pulsed DC at higher voltage could allow them to just use a transformer and some capacitors for the power supply shelf. The pulses would have to be directly opposing each other.
  • Jaybus - Tuesday, April 28, 2015 - link

    That depends. The low voltage DC requires a high current, and so correspondingly high line loss. Line loss is proportional to the square of the current, so the 5V "rail" will have more than 4x the line loss of the 12V "rail", and the 3.3V rail will be high current and so high line loss. It is probably NOT more efficient than a modern PS. But what it does do is move the heat generating conversion process outside of the chassis, and more importantly, frees up considerable space inside the chassis.
  • Menno vl - Wednesday, April 29, 2015 - link

    There are already a lot of things going on in this direction. See http://www.emergealliance.org/
    and especially their 380V DC white paper.
    Going DC all the way, but at a higher voltage to keep the demand for cables reasonable. Switching 48VDC to 12VDC or whatever you need requires very similar technology to switching 380VDC to 12VDC. Of course the safety hazards are different, and it is similar to mixing AC and DC, which is a LOT of trouble.
  • Casper42 - Monday, May 4, 2015 - link

    Indeed, HP already makes 277VAC and 380VDC Power Supplies for both the Blades and Rackmounts.

    277 VAC is apparently what you get when you split 480 VAC 3-phase into its individual phases.
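For readers who want to check the back-of-the-envelope numbers in the thread above, here is a minimal Python sketch. The 6 kW half-rack load is the figure from the comments; the 5 mΩ feeder resistance and the particular set of voltages are illustrative assumptions only, not measured values.

```python
import math

# Back-of-the-envelope check of the numbers discussed in the comments above.
# The 6 kW half-rack load comes from the thread; the 5 milliohm feeder
# resistance is an arbitrary illustrative value, not a measured figure.
POWER_W = 6_000               # roughly half a rack, per the comments
FEEDER_RESISTANCE_OHM = 0.005 # assumed one-way busbar/feeder resistance

for volts in (12, 48, 277, 380):
    amps = POWER_W / volts                        # current needed to deliver 6 kW
    loss_w = amps ** 2 * FEEDER_RESISTANCE_OHM    # I^2 * R conduction loss
    print(f"{volts:>3} V: {amps:6.1f} A, ~{loss_w:7.1f} W lost in the feeder")

# The 277 V figure: line-to-neutral voltage of a 480 V 3-phase feed.
print(f"480 V / sqrt(3) = {480 / math.sqrt(3):.0f} V")
```

At 12 V the sketch reproduces the 500 A figure (and a four-figure conduction loss for the assumed resistance), while 48 V cuts the current by a factor of four and the loss by a factor of sixteen, which is the trade-off the commenters are describing.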
