Maximum PC

Simultaneous Multithreading and You

SMT, although not new to the scene, has been dominated by Intel’s Hyper-Threading—at least, until now

And so, we finally get to look at AMD's Simultaneous Multithreading. Although Intel was certainly the first to the consumer market with its iteration of simultaneous multithreading, Hyper-Threading, AMD's SMT operates in much the same way.

When most people think of a thread, they imagine it's not that dissimilar from a core. "It's the arm that feeds the core most often" is a phrase repeated readily online. In reality, a thread is smaller than a process (think Task Manager). In fact, it's defined as "the smallest sequence of programmed instructions that can be managed independently by a scheduler." Traditionally, a core processes one thread at a time. However, the instructions within each thread vary in difficulty, and the time needed to work through each part of that thread can't be hidden simply by techniques such as OoOE and branch prediction. So, Simultaneous Multithreading shares core resources between multiple threads (in this case, two simultaneously) and, with a combination of algorithms and openly shared resources, allows the processor to schedule those two threads for execution in tandem on whichever execution units aren't currently being utilized.
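To see what that trick looks like from the software side, here's a minimal Python sketch (our own illustration, which assumes the third-party psutil package is installed). It simply compares the physical core count with the number of logical processors the operating system schedules threads on; on an SMT-enabled Ryzen chip, the second figure is double the first.

```python
# Minimal sketch: how SMT appears to the operating system.
# Assumes the third-party "psutil" package is available (pip install psutil).
import os
import psutil

physical = psutil.cpu_count(logical=False)  # real cores on the package
logical = os.cpu_count()                    # hardware threads the scheduler sees

print(f"Physical cores:     {physical}")
print(f"Logical processors: {logical}")

# On an eight-core Ryzen with SMT enabled this prints 8 and 16:
# the OS schedules two hardware threads onto each physical core.
if physical and logical == 2 * physical:
    print("Two hardware threads per core: SMT appears to be enabled.")
```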

This tricks the operating system into thinking the processor has two cores, instead of the physical one. With Ryzen, only three parts of each core are statically partitioned, and so unavailable to both threads simultaneously: the Retire Queue, where instructions are sent, ready to be reordered after OoOE has had its way; the Store Queue, where the AGUs temporarily hold memory address allocations; and the Micro-Op Queue, where the decoded instructions are fed into the IU and the FPUs. Everything else that we've previously discussed is available to both threads.

INTEGRATED I/O

So, what's different this time around, compared to team blue? As far as we know, the vast majority of Ryzen CPUs feature heavily integrated I/O on the chip itself. Unlike Intel, which only offers 16 PCIe 3.0 lanes on its mainstream processors, all of them dedicated to graphics, Ryzen has a total of 20 PCIe 3.0 lanes. It still houses the same 16 dedicated to graphics, but adds another four for what AMD has termed "general use," which in practice means PCIe NVMe SSDs.

In a way, this harks back to AMD's old philosophy of "update the processor, not the motherboard." Because most of the I/O support is now on the chip itself, those not wanting to upgrade their motherboard over the next couple of generations need not worry, while newer generations of chips will hopefully support a greater number of direct-connect I/Os.

Having the PCIe 3.0 lanes on the chip like that eliminates the chipset itself as the middleman, and allows all Ryzen processors to have unfettered access to any connected SATA or NVMe devices—likely the devices providing the vast majority of program instructions to the processor. This reduces the additional delays created by an intermediary chipset, and is a feature seen on Intel's HEDT Extreme Edition line-up of processors.

X370 CHIPSET

Now, that's not to say the X370 chipset isn't throwing its own punches. With native support for USB 3.1, support for six USB 3.0 devices, six USB 2.0 devices, four SATA 6Gb/s, and an additional four SATA 6Gb/s or two SATA Express, there's plenty on board to keep up with the latest innovations hitting mainstream storage, and to keep enthusiast builders happy. And, of course, that doesn't include any support added by third-party chipset manufacturers.

Is it as inclusive as Intel's Z270 or X99 chipsets? No. Does it lack RAID 0/1 M.2 PCIe support? Yes. But, to be honest, these are minor niggles, and for the vast majority of people—even power users—they're likely to matter very little.

SENSEMI SUITE

SenseMI is AMD's latest optimization package, developed at a software level to take advantage of the Ryzen processors. It's made up of five separate parts: Pure Power, Precision Boost, XFR, Neural Net Prediction, and Smart Prefetch.

• Pure Power Pure Power is an advanced form of power and temperature monitoring. Each Ryzen processor has around 1,000 sensors embedded across the chip, capable of detecting temperature and voltage variances down to a single degree or millivolt. This allows Ryzen, through learning algorithms, to adjust both frequency and voltage across the chip to lower power draw without affecting performance. It sounds overly complex, but it's largely there to mitigate the effects of the silicon lottery: because every chip has its own unique characteristics, Pure Power can identify any potential weak spots and work around them, delivering identical performance with stronger power savings overall.

• Precision Boost Think of Precision Boost as a smarter form of Intel's Turbo technology. Unlike Intel's 100MHz increments, AMD uses a total of 100 25MHz increments across its range, allowing for smart use of all eight cores, depending on workload, and in more granular steps than the competition, which improves stability (there's a rough sketch of the logic after this list). Take the Ryzen 7 1800X: if only two cores are being used fully, Precision Boost bumps those two cores up to 4GHz; load any more than that, and it boosts all of the cores from the 3.6GHz base frequency up to 3.7GHz.

• XFR (or Extended Frequency Range) XFR is a further extension of Precision Boost. However, it is based entirely on cooling, as opposed to utilization. If the processor detects that its temperatures remain below certain levels under load, it can auto-overclock itself by an additional 100MHz across multiple cores. This works with air towers and AIO coolers, and raises the overall maximum clock speed by 100MHz, both at the full eight-core complement (boosting it up to 3.8GHz) and when two cores are being utilized (boosting up to 4.1GHz).

• Neural Net Prediction According to AMD, Neural Net Prediction is a "true artificial network inside every Zen processor." In short, it builds a model of the decisions driven by software code execution, anticipates future decisions, and preloads instructions. So, hypothetically, the more you run a certain program, the better it will operate. Whether that holds true is difficult to tell at this point.

• Smart Prefetch Smart Prefetch is in a very similar vein to Neural Net Prediction, anticipating the location of future data accesses by application code. Sophisticated learning algorithms model and learn application data access patterns, and prefetch vital data into the local cache, so it's ready for immediate use. Once again, this is something that should develop over time and with regular use.
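The 1800X figures above lend themselves to a simple model. The Python sketch below is purely our own illustration, not AMD's actual algorithm: it picks a target clock from the active core count, then adds the XFR bonus when a hypothetical temperature threshold says there's cooling headroom, with everything kept on Precision Boost's 25MHz grid.

```python
# Rough model of Precision Boost + XFR behavior on a Ryzen 7 1800X.
# Our own sketch of the published figures, not AMD's real algorithm;
# the 60C threshold is a hypothetical stand-in for "enough thermal headroom".

STEP_MHZ = 25              # Precision Boost moves in 25MHz increments
ALL_CORE_BOOST_MHZ = 3700  # more than two cores loaded (base clock is 3600)
TWO_CORE_BOOST_MHZ = 4000  # one or two cores loaded
XFR_BONUS_MHZ = 100        # extra 100MHz when cooling allows

def target_clock_mhz(active_cores: int, temp_c: float, xfr_limit_c: float = 60.0) -> int:
    """Return an illustrative target clock in MHz for a given load and temperature."""
    clock = TWO_CORE_BOOST_MHZ if active_cores <= 2 else ALL_CORE_BOOST_MHZ
    if temp_c < xfr_limit_c:           # XFR: thermal headroom buys another 100MHz
        clock += XFR_BONUS_MHZ
    assert clock % STEP_MHZ == 0       # stays on the 25MHz grid
    return clock

print(target_clock_mhz(active_cores=2, temp_c=45.0))  # 4100: two-core boost plus XFR
print(target_clock_mhz(active_cores=8, temp_c=75.0))  # 3700: all-core boost, no XFR
```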

Each Ryzen processor is different, and SenseMI strives to fix that.
All the AM4 chipsets. Note: X300 and A300 chipsets, supporting ITX, will be released later this year.
