What Makes the NUCserver a Server?

The NUCserver project, which uses Intel’s NUC mini PCs as servers, began with a simple question: what is a server?


True Networks has been an NSP (Network Service Provider) for more than five years, providing services and rentals through the servers installed in our data centers. Like most hosting businesses, the largest portion of our operating costs goes toward physical space and leased lines.


Our Japanese branch, True Networks Japan, provides the same services as True Networks, yet its operating costs are at least four times higher than in Korea. We have pored over and applied various cost-cutting measures to become more profitable, but we have always hit a physical barrier.


Let’s return to the initial question: “what is a server?” According to Wikipedia, a server “is a system or program that responds to requests across a computer network to provide, or help to provide, a network or data service.” By this definition, a server, as hardware, is a computer that serves a specific function or role from a remote location through a network and, unlike a PC or a workstation, is physically inaccessible. Because a server needs to run continuously for long periods in an environment where the user or administrator cannot manage it directly, reliability and remote manageability—the biggest differentiators separating servers from PCs and workstations—are of utmost importance.


Of course, reliability is highly desirable for any home or business PC, and high-performance enterprise workstations for jobs like graphics rendering, architecture, and statistics are expected to have server-level reliability. However, it is not a strict requirement, and the need for remote management is relatively low.


In the mind of the average user, a server is intimidating machinery, much more powerful and scalable than the average PC. Quite to the contrary, a steady stream of recent articles has described devices built on low-power processors, like the Raspberry Pi, performing as servers with limited functions.

By all means, large servers have an edge in computing capability, necessitated by tasks like cluster supercomputing, graphics rendering, and statistics, since their processing power, memory, storage, and I/O can be expanded to the maximum. Nevertheless, the performance necessary for web, file, DNS, or email servers for personal use or small businesses is relatively low, and the same goes for servers with limited roles.


The ideal server hardware therefore needs:

  1. Performance for handling the services to be provided to users over the network
  2. Reliability for continuous performance during the service term
  3. Remote manageability for emergencies, troubleshooting and maintenance


To get a little more greedy, we could add great price-performance ratio, low maintenance costs, and good technical support to the list. But expandability and high performance are still not requirements.

They are merely specifications that depend on the purpose of the server.


The NUCserver project began with this reexamination of what defines a server.

We here at True Networks have learned a thing or two over the past years:

  1. The average load of the servers we operate or manage never goes above 10%. The fixation on performance is creating excess capacity.
  2. Customers demand high-performance servers with scalability in mind, but fewer than 10% of our customers have actually needed upgrades due to expansion. In addition, the majority of customers who do upgrade buy an additional server instead of adding storage or computing power.
  3. Although the demand for public cloud services like virtual machines has been rising, customers often have second thoughts about their security and reliability compared to a private server. In these cases, they end up building a private cloud, which often takes far more resources than expected.
  4. Even though physical, independent servers are no longer mainstream and are in decline because of their high costs, many customers still want to rent one or have one installed in our data centers.
  5. A decade of price wars has wreaked havoc on the profit margins of most players in the data center industry. As the industry reaches the conclusion that competing on service is essential, the few companies that can afford R&D for new services and management systems are the ones that have maintained their revenues. Cost cutting remains vital.
  6. In order to become more cost competitive, we need server systems with high space efficiency and low power consumption that will allow us to cut operating costs.


These are widely agreed upon, but let’s focus on number six.

As a solution to the power and space efficiency problem, many server manufacturers, including Dell, HP, and Supermicro, have offered high-density blade servers, which have proven effective in many respects, and many large-scale enterprises have adopted them. Nonetheless, blade servers still have unsolved problems.


  1. Many enterprises adopt blade servers with the idea that the total cost of ownership (TCO) will be lower, since the cost of adoption is similar to conventional servers while power and space efficiency are better. This may be true, but maintenance costs are higher than for conventional servers, because proprietary components and specialized engineers for proprietary hardware prove to be expensive.
  2. Focusing on power efficiency and performance per unit of space has led to high computing power in each blade. This makes it difficult to respond to the demand for private, moderate-performance servers.
  3. The maximum power supply per unit of space, set by the data center operator, is relatively low, which causes problems when adopting blade servers. For example, in a general-purpose data center in Korea, the power supply limit per standard 42U rack is 220 V at 30 A, or 6.6 kW (110 V at 60 A in Japan), but for safety this is usually held back to around 3 kW. For more power, customers must individually enter a contract with higher fees. Within this limit, considering that an average 4-6U, 12-slot blade server chassis draws between 1 and 2 kW, at most three sets of blade servers (12-20U) can be installed in a single rack. This leaves about half of a standard 42U rack empty, which can only be filled by buying additional power at additional fees (usually 1.5 times the normal fee). Ultimately, the space-power-cost efficiency is low.
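The rack arithmetic above can be sketched as a few lines of Python. The 220 V / 30 A supply, the ~3 kW safety derating, and the 1-2 kW draw per 12-slot chassis are the figures quoted in the text; the 5U chassis height is an assumed mid-range value within the 4-6U range mentioned.

```python
# Worked rack power budget for the blade-server scenario described above.
RACK_SUPPLY_W = 220 * 30       # 6.6 kW theoretical supply per 42U rack
SAFE_LIMIT_W = 3000            # typical derated allocation for safety
CHASSIS_W_LOW, CHASSIS_W_HIGH = 1000, 2000  # draw per 12-slot blade chassis
CHASSIS_U = 5                  # assumed height within the 4-6U range

max_chassis = SAFE_LIMIT_W // CHASSIS_W_LOW    # best case at 1 kW each
min_chassis = SAFE_LIMIT_W // CHASSIS_W_HIGH   # worst case at 2 kW each
space_used_u = max_chassis * CHASSIS_U         # rack units actually occupied

print(RACK_SUPPLY_W, min_chassis, max_chassis, space_used_u)
```

Even in the best case, three chassis occupy only about 15U of the 42U rack, which is the roughly-half-empty rack the text describes.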


These observations have led to the specifications for our server design, the solution to the aforementioned problems.


  1. Appropriate performance, enough to provide the services required by our clients.
  2. Reliability nearing or identical to conventional servers.
  3. Manageability nearing or identical to conventional servers.
  4. Staying within the power limit of 6 kW per rack while providing enough performance per module for an independent server. Achieving low TCO even after the additional fees by maximizing per-module power and space efficiency.
  5. Low maintenance costs by using standard components.


When Intel first revealed the NUC in 2012, we contemplated using it as a server, only to abandon the idea shortly after. The primary reason was that IPMI-like functionality for remote power control and management is essential for a server. At the time, we were searching for a Pico-ITX-sized, E3-1200-based product with integrated IPMI, and we were even ready to become an ODM, design our own, or collaborate with existing server board manufacturers like Iwill and Tyan Korea.


The DC53427HYE, a new NUC model released by Intel in Q4 2013, included vPro technology, which provided a robust management system that rivaled IPMI with better security. It caught our attention. This particular model had an Ivy Bridge i5 processor and, unlike other NUC devices, was designed as an embedded device, offering a long MTBF and high reliability.


While we had our hands full trying to acquire a sample and implement our design, the market in 2014 became overflowing with Haswell models, but a Haswell model with vPro was nowhere to be seen. It was hugely disappointing when we enquired about a vPro model and learned that future releases were uncertain. We dropped the project soon after.


This model still had some flaws. Its small size made it incompatible with standard 2.5” SSDs and HDDs. It supported 16 GB of memory, which made it suitable for some cases, but mSATA SSD capacities maxed out at a meager 256 GB.


In early 2015, Intel released a successor to the vPro NUC, the NUC5i5MYHE. This model was a step closer to the server we had in mind. Full of anticipation, we acquired a sample and rebooted the NUCserver project. We developed standard and proprietary rack systems, applications, and a power control device. To work around a hardware limitation that does not allow remote desktop access, we also created a MiniDP headless dongle (a virtual monitor emulator).


Let me take you through what we’ve achieved in the past year.

  1. Reliability nearing or identical to conventional servers

→ Embedded design (MTBF 62,000 hours)

  2. Manageability nearing or identical to conventional servers

→ Development of a power/installation/management control system based on the vPro technology.

→ Development of the MiniDP to headless dongle

  3. Staying within the power limit of 6 kW per rack while providing enough performance per module for an independent server. Achieving low TCO even after the additional fees by maximizing per-module power and space efficiency.

→ Development of a rack system equipping a standard 42U rack with 192 NUCs and an IP-PDU with a power supply device, maximizing performance per watt while staying below 5 kW total.

→ Moderate performance using the Broadwell i5 processor and the latest HD Graphics technology.

  4. Low maintenance costs by using standard components.

→ Keeping the hardware standard using the NUC5i5MYHE kit

  5. Expandability (added)

→ Ability to install a 2.5” disk and dual NGFF modules (standard)

→ Ability to install a maximum of 32 GB of ECC memory (option)

→ Ability to install an extra gigabit Ethernet port (option)

Once the development of an NGFF-based (M.2) gigabit network interface is completed in Q1 2016, along with a proprietary daughter board and NUCserver brackets, the NUCserver will be nothing short of a true server.
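As a back-of-the-envelope check of the rack figures above: 192 NUCs per 42U rack under a 5 kW total is an implied budget of roughly 26 W per module. The 36-servers-per-rack blade baseline below is an assumption (three 12-slot chassis under the ~3 kW derating), not a figure from the text.

```python
# Implied per-module power budget of the NUCserver rack described above.
NUCS_PER_RACK = 192
RACK_BUDGET_W = 5000
BLADE_SERVERS_PER_RACK = 36   # assumed: three 12-slot chassis per rack

watts_per_nuc = RACK_BUDGET_W / NUCS_PER_RACK          # budget incl. overhead
density_gain = NUCS_PER_RACK / BLADE_SERVERS_PER_RACK  # servers per rack vs blades

print(round(watts_per_nuc, 1), round(density_gain, 1))
```

Under these assumptions, each NUC module gets about 26 W, and the rack holds roughly five times as many independent servers as the blade configuration.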


