
Intel says 'The Element' modular computer is "the future"

by Mark Tyson on 8 October 2019, 10:11

Tags: Intel (NASDAQ:INTC)

Quick Link: HEXUS.net/qaeele


At an event in London yesterday Intel presented a computer which, in a slide, it humbly described as "the future". The slide shows the new computer on a timeline extending beyond its NUCs and other Compute Element devices, and Intel referred to it in discussions with AnandTech writer Dr. Ian Cutress as 'The Element'. At this early stage the product looks like a bulky, rather tall, twin-slot PCIe card. Nevertheless, Cutress was enthused, harking back to CES 2014: "Behold, Christine is real, and it's coming soon," wrote the editor.

You can see the pictures, but what is 'The Element' in tech terms? Inside the PCIe card shroud is a BGA Xeon processor, accompanied by two M.2 slots, two SO-DIMM slots for LPDDR4 memory, and a single fan cooler. The card's motherboard provides plenty of I/O on its bracket, including two Ethernet ports, four USB ports, an HDMI video output from the Xeon's integrated graphics, and two Thunderbolt 3 ports. Wi-Fi is also built-in, says the report. The Element also has an additional 8-pin PCIe power connector.

Intel plans to bundle The Element with a backplane (a PCB with multiple PCIe slots, which would also serve as a power source) so you could use several of these cards and/or pair them with discrete GPUs, FPGAs, RAID controllers and other components.

Board partners will be able to customise The Element, though not by much more than swapping coolers and backplates, suggests the source report. OEMs will get the key components in Q1 2020 and we should see product developments shortly after that, perhaps ahead of Computex.



HEXUS Forums :: 13 Comments

The 1970s called, they want their S-100 bus topology back.

But that's the bit that baffles me: PCIe isn't, strictly speaking, a bus, and if you put a CPU on a card it has x16 lanes to talk to the backplane, which then have to be multiplexed or divided up among the other slots, giving a choice of latency or bandwidth penalty. Put the CPU on the backplane, you could call that a "motherboard", and you get as many lanes to the PCIe slots as you want.

So AFAICS this is a way of limiting power, cooling & expansion all while driving up cost. Someone please explain what I have missed?!?!
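The lane-sharing penalty the comment describes can be illustrated with some back-of-envelope arithmetic. This is only a rough sketch: it assumes PCIe 3.0 figures (~0.985 GB/s usable per lane after 128b/130b encoding) and an even split of the uplink, ignoring PCIe switch overheads.

```python
# Rough sketch: bandwidth tradeoff when a CPU card's single x16 uplink
# is shared across several backplane slots. PCIe 3.0 figures assumed
# (~0.985 GB/s usable per lane); real switches add further overhead.

def per_slot_bandwidth_gbs(uplink_lanes=16, slots=4, per_lane_gbs=0.985):
    """Bandwidth each slot gets if the uplink is divided evenly."""
    return uplink_lanes * per_lane_gbs / slots

uplink = 16 * 0.985  # ~15.8 GB/s total from the card's x16 uplink
print(f"x16 uplink total: {uplink:.1f} GB/s")
for n in (1, 2, 4):
    print(f"{n} downstream slot(s): {per_slot_bandwidth_gbs(slots=n):.2f} GB/s each")
```

With four devices behind the card's single x16 uplink, each gets roughly the equivalent of an x4 link, which is the bandwidth-versus-latency compromise the comment is pointing at.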
Nope, I had the same thought - I don't see what they are trying to achieve with this. Unless this is aimed strictly at the enterprise market in some capacity…?
So basically a ‘blade server’ that fits in/on another pc rather….

I can see the logic of a low power mITX server PC along with a 'gaming PC' inside one case (case etc. allowing), and I can see the benefit of specialised processors to speed up certain functions such as encoding/decoding, but I just can't see the reason for a (low power) PC that needs another PC for it to run in the first place… I might as well just use the main PC that is on.

Only vague use case I can think of is multiple users with their own desktop, but with multicore CPUs and virtualisation I'm not sure that's really needed either (Linus Tech Tips did a video on this and showed it's viable).
DanceswithUnix
The 1970s called, they want their S-100 bus topology back.

But that's the bit that baffles me: PCIe isn't, strictly speaking, a bus, and if you put a CPU on a card it has x16 lanes to talk to the backplane, which then have to be multiplexed or divided up among the other slots, giving a choice of latency or bandwidth penalty. Put the CPU on the backplane, you could call that a "motherboard", and you get as many lanes to the PCIe slots as you want.

So AFAICS this is a way of limiting power, cooling & expansion all while driving up cost. Someone please explain what I have missed?!?!

What I was thinking but you have the actual knowledge to back it up.
This feels like such a wasteful design method compared to a blade rack with networking backplane. Unless Intel are dramatically increasing their PCIe backplane interconnects, how many of these would you even be able to fit in a standard rack/tower chassis?