Information technology

News

  • 6 Best Interactive Voice Response (IVR) Software of 2024
    by Corry Cummings on 26. July 2024 at 23:02

    The best IVR systems are designed to easily manage inbound calls at scale, but can also be a solution for the needs of a small business. Check out the top six options to see which is the best fit for you.

  • Intel 13th and 14th Gen 'Raptor Lake' instability troubles: Everything you need to know
    by Jeremy Kaplan on 26. July 2024 at 20:47

    Reports of instability problems with Intel's 13th and 14th Gen 'Raptor Lake' CPUs began appearing more frequently in early 2024. Intel began a serious investigation into the reports and appears to be close to issuing a fix, though it likely won't help already impacted processors. Here's everything you need to know.

  • RingCentral Review (2024): Is It Right For Your Business?
    by Corry Cummings on 26. July 2024 at 20:08

    RingCentral is one of the most powerful all-in-one communication platforms for midsize and large businesses. Learn more about the key features and pricing in our in-depth review, plus find alternative call center software options for small businesses.

  • ASRock Launches Passively Cooled Radeon RX 7900 XTX & XT Cards for Servers
    on 26. July 2024 at 20:00

    As sales of GPU-based AI accelerators remain as strong as ever, the immense demand for these cards has led to some server builders going off the beaten path in order to get the hardware they want at a lower price. While both NVIDIA and AMD offer official card configurations for servers, the correspondingly high price of these cards makes them a significant financial outlay that some customers either can't afford, or don't want to pay. Instead, these groups have been turning to buying up consumer graphics cards, which although they come with additional limitations, are also a fraction of the cost of a "proper" server card. And this week, ASRock has removed another one of those limitations for would-be AMD Radeon users, with the introduction of a set of compact, passively-cooled Radeon RX 7900 XTX and RX 7900 XT video cards that are designed to go in servers.
    Make no mistake, ASRock's AMD Radeon RX 7900 XTX Passive 24GB and AMD Radeon RX 7900 XT Passive 20GB boards are still fully functional graphics cards, with four display outputs and based on the Navi 31 graphics processor (with 6144 and 5376 stream processors, respectively), so they can output graphics and work with both games and professional applications. And with TGPs of 355W and 315W respectively, these cards aren't underclocked in any way compared to traditional desktop cards. However, unlike a typical desktop card, the cooler on these cards is a dual-slot heatsink without any kind of fan attached, which is meant to be used with high-airflow forced-air cooling. All told, ASRock's passive cooler is pretty capable, as well; it's not just a simple aluminum heatsink. Beneath the fins, ASRock has gone with a vapor chamber and multiple heat pipes to distribute heat to the rest of the sink. Even with forced-air cooling in racked servers, the heatsink itself still needs to be efficient to keep a 300W+ card cool with only a dual-slot cooler – and especially so when upwards of four of these cards are installed side-by-side with each other. To make the boards even more server friendly, these cards are equipped with a 12V-2×6 power connector, a first for the Radeon RX 7900 series, simplifying installation by reducing cable clutter.
    Driving the demand for these cards in particular is their memory configuration. While the 24GB on the 7900 XTX and 20GB on the 7900 XT is half as much (or less) memory than can be found on AMD and NVIDIA's high-end professional and server cards, AMD is the only vendor offering consumer cards with this much memory for less than $1000. So for a memory-intensive AI inference cluster built on a low budget, the cheapest 24GB card available starts looking like a tantalizing option.
    Otherwise, ASRock's Radeon RX 7900 Passive cards distinguish themselves from AMD's formal professional and server cards by what they're not capable of doing: namely, remote professional graphics or other applications that need things like GPU partitioning. These parts look to be aimed at one application only, artificial intelligence, and are meant to process huge amounts of data. For this purpose, their passive coolers will do the job, and the lack of ProViz or VDI-oriented drivers ensures that AMD keeps those lucrative markets for itself.

  • US, UK and EU Make Joint Statement on Fostering AI Competition
    by Megan Crouse on 26. July 2024 at 18:55

    The AI industry is a small world – and multiple jurisdictions are determined to keep an eye on exactly how small.

  • What Is Hybrid Project Management?
    by Brandon Woods on 26. July 2024 at 18:34

    Explore the key features and benefits of hybrid project management, and create a step-by-step plan to choose features, implement tools and train your team.

  • Nvidia RTX 3050 A Laptop GPU specs revealed and it's as weak as expected — comes with just 1,768...
    on 26. July 2024 at 18:30

    Nvidia has apparently found a use for Ada Lovelace chips that don't meet QC standards. The new RTX 3050 A Laptop GPU uses an AD106 die, with over half of the cores disabled and only two of the potential four memory channels.

  • The Top 5 Apollo Alternatives for 2024
    by Allyssa Haygood-Taylor on 26. July 2024 at 18:00

    Explore the best database alternatives to Apollo.io for effective B2B prospecting and lead generation.

  • Intel 13th Gen CPUs allegedly have 4X higher return rate than the prior gen — retailer stats also...
    on 26. July 2024 at 16:40

    A European retailer reported that Intel processor RMAs have jumped from 1.75% to as much as 7% in recent years. Given the ongoing reports of long-term degradation issues with the 13th and 14th Gen Intel Raptor Lake CPUs, this could be just the tip of the iceberg.

  • Game dev adds in-game crash warning for 13th and 14th Gen Intel CPUs — link provides affected...
    on 26. July 2024 at 15:52

    Alderon Games has incorporated a Raptor Lake-specific crash message for customers' gaming PCs. This message notifies gamers about Raptor Lake's instability problems if the game crashes.

  • The 6 Best SIP Trunk Providers of 2024
    by Corry Cummings on 26. July 2024 at 15:44

    SIP trunking is a cost-effective option for modernizing legacy phone systems with minimal disruption. Discover the best SIP trunking providers and how to choose one in our complete buyer’s guide.

  • Get this $10 budget Lenovo speaker setup for your laptop or dorm room
    on 26. July 2024 at 15:40

    This budget Lenovo speaker deal will only set you back $10. Perfect for a low-cost audio solution for a small space.

  • Nvidia and partners could charge up to $3 million per Blackwell server cabinet — analysts project...
    on 26. July 2024 at 14:58

    Morgan Stanley says AI server makers can earn $210 billion on AI servers next year.

  • Secure Boot key compromised in 2022 is still in use in over 200 models — an additional 300 more...
    on 26. July 2024 at 14:55

    Software security firm Binarly discovered that over 200 device models used a compromised security key, while an additional 300 used default test keys shared with nearly all of AMI's customers.

  • ASRock introduces passively-cooled RX 7900 XT and XTX with vapor chamber heatsink and slightly...
    on 26. July 2024 at 14:51

    The ASRock Passive Series GPUs have slightly lower clocks and performance than their air-cooled counterparts.

  • Cyberdore 2064 Cyberdeck features an oversized scroll wheel, handle, OLED display, and Raspberry Pi...
    on 26. July 2024 at 13:01

    The Cyberdore 2064 cyberdeck leverages a Raspberry Pi Zero and supplementary Pi Pico inside a custom 3D-printed enclosure, flanked by a giant scroll wheel.

  • Supercomputing icon warns that China could have the world's fastest supercomputers
    on 26. July 2024 at 12:51

    China's government no longer wants to disclose the performance of its supercomputers, which creates suspicion in the U.S.

  • Master Cybersecurity With The Complete CompTIA Security+ SY0-701 Certification Kit by IDUNOVA
    by TechRepublic Academy on 26. July 2024 at 8:42

    Prepare for your cybersecurity certification with comprehensive study materials (including 30 hours of videos and hands-on labs) and expert guidance.

  • OpenAI Goes For Google With Search Engine Prototype
    by Megan Crouse on 25. July 2024 at 21:59

    SearchGPT combines AI-generated content with human-written articles in a search engine format that can handle follow-up questions.

  • The 6 Best Call Center Software of 2024
    by Corry Cummings on 25. July 2024 at 21:08

    Struggling to choose the best call center software for your business? Compare top options to find the solution that fits both your team and budget.

  • SK hynix to Enter 60 TB SSD Club Next Quarter
    on 25. July 2024 at 21:00

    SK hynix this week reported its financial results for the second quarter, as well as offering a glimpse at its plans for the coming quarters. Notable among the company's plans for the year is the release of an SK hynix-branded 60 TB SSD, which will mark the firm's entry into the ultra-premium enterprise SSD league. "SK hynix plans to expand sales of high-capacity eSSD and lead the market in the second half with 60TB products, expecting eSSD sales to be more than quadrupled compared to last year," a statement by SK hynix reads.
    Currently there are only two standard form-factor 61.44 TB SSDs on the market: the Solidigm D5-P5336 (U.2/15mm and E1.L), and the Samsung BM1743 (U.2/15mm and E3.S). Both are built around a proprietary controller (Solidigm's controller still carries an Intel logotype) with a PCIe 4.0 x4 interface and use QLC NAND for storage. SK hynix's brief mention of the drive means that there aren't any formal specifications or capabilities to discuss just yet. But it is reasonable to assume that the company will use its own QLC memory for its ultra-high-capacity drives.
    What's more intriguing is which controller the company plans to use and how it is going to position its 60 TB-class SSD. Internally, SK hynix has access to two controller teams, both of which have the expertise to develop an enterprise-grade controller suitable for a 60 TB drive. SK hynix technically owns Solidigm, the former Intel SSD and NAND unit, giving SK hynix the option of using Solidigm's controller, or even reselling a rebadged D5-P5336 outright. Alternatively, SK hynix has its own (original) internal SSD team, which is responsible for building the company's well-received Aries SSD controller, among other work.
    Ultra-high-capacity SSDs for performance-demanding, read-intensive storage applications, such as AI inference on the edge or content delivery networks, are a promising premium market. So SK hynix finds itself highly incentivized to enter it with a compelling offering.

  • Microsoft Tests Out Adding New Generative AI Search to Bing
    by Megan Crouse on 25. July 2024 at 19:30

    “A small percentage of user queries” will show AI-generated results in the initial test, Microsoft announced on Thursday.

  • AMD Delays Ryzen 9000 Launch 1 to 2 Weeks Due to Chip Quality Issues
    on 24. July 2024 at 22:00

    AMD sends word this afternoon that the company is delaying the launch of their Ryzen 9000 series desktop processors. The first Zen 5 architecture-based desktop chips were slated to launch next week, on July 31st. But citing quality issues that are significant enough that AMD is even pulling back stock already sent to distributors, AMD is delaying the launch by one to two weeks. The Ryzen 9000 launch will now be a staggered launch, with the Ryzen 5 9600X and Ryzen 7 9700X launching on August 8th, while the Ryzen 9 9900X and flagship Ryzen 9 9950X will launch a week after that, on August 15th.
    The exceptional announcement, officially coming from AMD’s SVP and GM of Computing and Graphics, Jack Huynh, is short and to the point. Ahead of the launch, AMD found that “the initial production units that were shipped to our channel partners did not meet our full quality expectations.” And, as a result, the company has needed to delay the launch in order to rectify the issue. Meanwhile, because AMD had already distributed chips to their channel partners – distributors who then filter down to retailers and system builders – this is technically a recall as well, as AMD needs to pull back the first batch of chips and replace them with known good units. That AMD has to essentially take a do-over on initial chip distribution is ultimately what’s driving this delay; it takes the better part of a month to properly seed retailers for a desktop CPU launch with even modest chip volumes, so AMD has to push the launch out to give their supply chain time to catch up.
    For the moment, there are no further details on what the quality issue with the first batch of chips is, how many are affected, or what any kind of fix may entail. Whatever the issue is, AMD is simply taking back all stock and replacing it with what they’re calling “fresh units.”
    AMD Ryzen 9000 Series Processors (Zen 5 Microarchitecture, "Granite Ridge")
    - Ryzen 9 9950X: 16C/32T, 4.3 GHz base / 5.7 GHz turbo, 16 MB L2 / 64 MB L3, DDR5-5600, 170 W, launches 08/15
    - Ryzen 9 9900X: 12C/24T, 4.4 GHz base / 5.6 GHz turbo, 12 MB L2 / 64 MB L3, DDR5-5600, 120 W, launches 08/15
    - Ryzen 7 9700X: 8C/16T, 3.8 GHz base / 5.5 GHz turbo, 8 MB L2 / 32 MB L3, DDR5-5600, 65 W, launches 08/08
    - Ryzen 5 9600X: 6C/12T, 3.9 GHz base / 5.4 GHz turbo, 6 MB L2 / 32 MB L3, DDR5-5600, 65 W, launches 08/08
    Importantly, however, this announcement is only for the Ryzen 9000 desktop processors, and not the Ryzen AI 300 mobile processors (Strix Point), which are still slated to launch next week. A mobile chip recall would be a much bigger issue (they’re in finished devices that would need significant labor to rework), but also, both the new desktop and mobile Ryzen processors are being made on the same TSMC N4 process node, and have significant overlap due to their shared use of the Zen 5 architecture. To be sure, mobile and desktop are very different dies, but it does strongly imply that whatever the issue is, it’s not a design flaw or a fabrication flaw in the silicon itself. That AMD is able to re-stage the launch of the desktop Ryzen 9000 chips so quickly – on the order of a few weeks – further points to an issue much farther down the line. If indeed the issue isn’t at the silicon level, then that leaves packaging and testing as the next most likely culprit. Whether that means AMD’s packaging partners had some kind of issue assembling the multi-die chips, or if AMD found some other issue that warrants further checks remains to be seen. But it will definitely be interesting to eventually find out the backstory here.
    In particular I’m curious if AMD is being forced to throw out the first batch of Ryzen 9000 desktop chips entirely, or if they just need to send them through an additional round of QA to pull bad chips. It’s also interesting here that AMD’s new launch schedule has essentially split the Ryzen 9000 stack in two. The company’s higher-end chips, which incorporate two CCDs, are delayed an additional week over the lower-end units with their single CCD. By their very nature, multi-CCD chips require more time to validate (there’s a whole additional die to test), but they also require more CCDs to assemble. So it’s a toss-up right now whether the additional week for the high-end chips is due to a supply bottleneck, or a chip testing bottleneck.
    The silver lining to all of this, at least, is that AMD found the issue before any of the faulty chips made their way into the hands of consumers. Though the need to re-stage the launch still throws a rather large wrench into the marketing efforts of AMD and its partners, a post-launch recall would have been far more disastrous on multiple levels, not to mention that it would have given the company a significant black eye. Something that arch-rival Intel is getting to experience for themselves this week.
    In any case, this will certainly go down as one of the more interesting AMD desktop chip launches – and the chips haven’t actually made it out the door yet. We’ll have more on the subject as further details are released. And look forward to chip reviews soon – just not on July 31st as originally planned.
    We appreciate the excitement around Ryzen 9000 series processors. During final checks, we found the initial production units that were shipped to our channel partners did not meet our full quality expectations. Out of an abundance of caution and to maintain the highest quality experiences for every Ryzen user, we are working with our channel partners to replace the initial production units with fresh units. As a result, there will be a short delay in retail availability. The Ryzen 7 9700X and Ryzen 5 9600X processors will now go on sale on August 8th, and the Ryzen 9 9950X and Ryzen 9 9900X processors will go on-sale on August 15th. Apologies for the delay. We pride ourselves in providing a high quality experience for every Ryzen user, and we look forward to our fans having a great experience with the new Ryzen 9000 series. -AMD SVP and GM of Computing and Graphics, Jack Huynh

  • Micron Launches 9550 PCIe Gen5 SSDs: 14 GB/s with Massive Endurance
    on 24. July 2024 at 17:10

    Micron has introduced its Micron 9550-series SSDs, which it claims are the fastest enterprise drives in the industry. The Micron 9550 Pro and 9550 Max SSDs with a PCIe 5.0 x4 interface promise unbeatable performance amid enhanced endurance and power efficiency, which will be particularly beneficial for data centers.
    Micron's 9550-series solid-state drives are based on a proprietary NVMe 2.0b controller with a PCIe Gen5 x4 interface and 232-layer 3D TLC NAND memory. The drives will be available in capacities ranging from 3.2 TB to 30.72 TB with one or three drive writes per day endurance as well as U.2, E1.S, and E3.S form factors to cater to the requirements of different types of servers.
    As far as performance is concerned, the Micron 9550 NVMe SSD boasts impressive metrics, including sustained sequential read speeds of up to 14.0 GB/s and sequential write speeds of up to 10.0 GB/s, which is higher than the peak performance offered by Samsung's PM1743 SSDs. For random operations, it achieves 3.3 million IOPS in random reads and 0.9 million IOPS in random writes, again surpassing competitor offerings. Micron says that power efficiency is another standout feature of its Micron 9550 SSD: it consumes up to 81% less SSD energy per terabyte transferred with NVIDIA Magnum IO GPUDirect Storage and up to 35% lower SSD power usage in MLPerf benchmarks compared to rivals. Considering that we are dealing with a claim by the manufacturer itself, the numbers should be taken with caution.
    Micron 9550 NVMe Enterprise SSD Specifications
    - Form factors: 9550 PRO - U.2, E1.S, and E3.S; 9550 MAX - U.2 and E1.S
    - Interface: PCIe 5.0 x4, NVMe 2.0b
    - Capacities: 9550 PRO - 3.84 TB, 7.68 TB, 15.36 TB, 30.72 TB; 9550 MAX - 3.2 TB, 6.4 TB, 12.8 TB, 25.6 TB
    - NAND: Micron 232L 3D TLC
    - Sequential read: up to 14,000 MBps; sequential write: up to 10,000 MBps
    - Random read (4 KB): up to 3.3M IOPS; random write (4 KB): up to 900K IOPS
    - Operating power: up to 18 W (read), up to 16 W (write); idle power: not specified
    - Write endurance: 1 DWPD (9550 PRO), 3 DWPD (9550 MAX)
    - Warranty: 5 years
    "The Micron 9550 SSD represents a giant leap forward for data center storage, delivering a staggering 3.3 million IOPS while consuming up to 43% less power than comparable SSDs in AI workloads such as GNN and LLM training," said Alvaro Toledo, vice president and general manager of Micron's Data Center Storage group. "This unparalleled performance, combined with exceptional power efficiency, establishes a new benchmark for AI storage solutions and demonstrates Micron’s unwavering commitment to spearheading the AI revolution."
    Micron traditionally offers its high-end data center SSDs in different flavors: the Micron 9550 Pro drives for read-intensive applications are set to be available in 3.84 TB, 7.68 TB, 15.36 TB, and 30.72 TB capacities with a one drive write per day (DWPD) endurance rating, whereas the Micron 9550 Max drives for mixed-use applications are set to be available in 3.2 TB, 6.4 TB, 12.8 TB, and 25.6 TB capacities with a three DWPD endurance rating. All drives comply with the OCP 2.0 r21 standards and OCP 2.5 telemetry. They also feature SPDM 1.2 and FIPS 140-3 security, a secure execution environment, and self-encrypting drive options. Micron has not touched upon the pricing of the new drives as it depends on volumes and other factors.
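    As a rough back-of-the-envelope check of those headline numbers (our own arithmetic, not Micron's, treating MBps as decimal megabytes), the quoted random-read rating alone already implies throughput close to the drive's sequential read ceiling:
```python
# Rough sanity check, not a Micron figure: bandwidth implied by the random-read rating.
random_read_iops = 3_300_000            # "up to 3.3M IOPS" (4 KB random reads)
block_size_bytes = 4 * 1024             # 4 KB
seq_read_bytes_per_s = 14_000 * 10**6   # "up to 14,000 MBps" sequential read (decimal MB assumed)

random_read_bytes_per_s = random_read_iops * block_size_bytes
print(f"Random 4K read throughput: ~{random_read_bytes_per_s / 1e9:.1f} GB/s")
print(f"Share of the sequential read figure: {random_read_bytes_per_s / seq_read_bytes_per_s:.0%}")
```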

  • JEDEC Plans LPDDR6-Based CAMM, DDR5 MRDIMM Specifications
    on 23. July 2024 at 23:00

    Following a relative lull in the desktop memory industry in the previous decade, the past few years have seen a flurry of new memory standards and form factors enter development. Joining the traditional DIMM/SO-DIMM form factors, we've seen the introduction of space-efficient DDR5 CAMM2s, their LPDDR5-based counterpart the LPCAMM2, and the high-clockspeed optimized CUDIMM. But JEDEC, the industry organization behind these efforts, is not done there. In a press release sent out at the start of the week, the group announced that it is working on standards for DDR5 Multiplexed Rank DIMMs (MRDIMM) for servers, as well as an updated LPCAMM standard to go with next-generation LPDDR6 memory.
    Just last week Micron introduced the industry's first DDR5 MRDIMMs, which are timed to launch alongside Intel's Xeon 6 server platforms. But while Intel and its partners are moving full steam ahead on MRDIMMs, the MRDIMM specification has not been fully ratified by JEDEC itself. All told, it's not unusual to see Intel pushing the envelope here on new memory technologies (the company is big enough to bootstrap its own ecosystem). But as MRDIMMs are ultimately meant to be more than just a tool for Intel, a proper industry standard is still needed – even if that takes a bit longer.
    Under the hood, MRDIMMs continue to use DDR5 components, form-factor, pinout, SPD, power management ICs (PMICs), and thermal sensors. The major change with the technology is the introduction of multiplexing, which combines multiple data signals over a single channel. The MRDIMM standard also adds RCD/DB logic in a bid to boost performance, increase capacity of memory modules up to 256 GB (for now), shrink latencies, and reduce power consumption of high-end memory subsystems. And, perhaps key to MRDIMM adoption, the standard is being implemented as a backwards-compatible extension to traditional DDR5 RDIMMs, meaning that MRDIMM-capable servers can use either RDIMMs or MRDIMMs, depending on how the operator opts to configure the system.
    The MRDIMM standard aims to double the peak bandwidth to 12.8 Gbps, increasing pin speed and supporting more than two ranks. Additionally, a "Tall MRDIMM" form factor is in the works, which is designed to allow for higher capacity DIMMs by providing more area for laying down memory chips. Currently, ultra high capacity DIMMs require using expensive, multi-layer DRAM packages that use through-silicon vias (3DS packaging) to attach the individual DRAM dies; a Tall MRDIMM, on the other hand, can just use a larger number of commodity DRAM chips. Overall, the Tall MRDIMM form factor enables twice the number of DRAM single-die packages on the DIMM.
    Meanwhile, this week's announcement from JEDEC offers the first significant insight into what to expect from LPDDR6 CAMMs. And despite LPDDR5 CAMMs having barely made it out the door, some significant shifts with LPDDR6 itself mean that JEDEC will need to make some major changes to the CAMM standard to accommodate the newer memory type.
    (JEDEC presentation: "The CAMM2 Journey and Future Potential")
    Besides the higher memory clockspeeds allowed by LPDDR6 – JEDEC is targeting data transfer rates of 14.4 GT/s and higher – the new memory form-factor will also incorporate an altogether new connector array. This is to accommodate LPDDR6's wider memory bus, which sees the channel width of an individual memory chip grow from 16-bits wide to 24-bits wide.
    As a result, the current LPCAMM design, which is intended to match the PC standard of a cumulative 128-bit (16x8) design, needs to be reconfigured to match LPDDR6's alterations. Ultimately, JEDEC is targeting a 24-bit subchannel/48-bit channel design, which will result in a 192-bit wide LPCAMM, while the LPCAMM connector itself is set to grow from 14 rows of pins to possibly as many as 20. New memory technologies typically require new DIMMs to begin with, so it's important to clarify that this is not unexpected, but at the end of the day it means that the LPCAMM will be undergoing a bigger generational change than what we usually see.
    JEDEC is not saying at this time when they expect either memory module standard to be completed. But with MRDIMMs already shipping for Intel systems – and similar AMD server parts due a bit later this year – the formal version of that standard should be right around the corner. Meanwhile, LPDDR6 CAMMs will be a bit farther out, particularly as the memory standard itself is still under development.
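    To make the width change concrete, here is a small worked-out sketch of the arithmetic described above; the LPDDR5X comparison speed of 7.5 GT/s is our own assumption for illustration, and none of this comes from JEDEC's spec sheets:
```python
# Channel-width arithmetic for the two module generations (a sketch, not JEDEC's tables).
lpcamm2_width_bits = 16 * 8        # LPDDR5X LPCAMM2: eight 16-bit channels -> 128-bit module
lpddr6_lpcamm_width_bits = 48 * 4  # LPDDR6: 24-bit subchannel / 48-bit channel, four channels -> 192-bit

lpddr6_gt_s = 14.4                 # JEDEC's stated initial LPDDR6 target
lpddr5x_gt_s = 7.5                 # assumed LPDDR5X speed, purely for comparison

for name, width, rate in [("LPCAMM2 (LPDDR5X)", lpcamm2_width_bits, lpddr5x_gt_s),
                          ("LPCAMM (LPDDR6)", lpddr6_lpcamm_width_bits, lpddr6_gt_s)]:
    print(f"{name}: {width}-bit module, ~{rate * width / 8:.0f} GB/s peak at {rate} GT/s")
```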

  • HighPoint Updates NVMe RAID Cards for PCIe 5.0: 50 GBps+ Direct-Attached SSD Storage
    on 23. July 2024 at 12:00

    HighPoint Technologies has updated its NVMe switch and RAID solutions to PCIe 5.0, with support for up to eight NVMe drives. The new HighPoint Rocket 1600 (switch add-in card) and 7600 series (RAID adapters) are the successors to the SSD7500 series adapter cards introduced in 2020. Similar to its predecessors, the new Rocket series cards are also based on a Broadcom PCIe switch (PEX 89048). The Rocket 7600 series runs the RAID stack on the integrated ARM processor (dual-core Cortex A15). The PEX 89048 supports up to 48 PCIe 5.0 lanes, out of which 16 are dedicated to the host connection in the Rocket adapters. The use of a true PCIe switch means that the product doesn't rely on PCIe lane bifurcation support in the host platform.
    HighPoint's Gen 5 stack currently has two products each in the switch and RAID lineups - an add-in card with support for M.2 drives, and a RAID adapter with four 5.0 x8 SFF-TA-1016 (Mini Cool Edge IO or MCIO) connectors for use with backplanes / setups involving U.2 / U.3 / EDSFF drives. The RAID adapters require HighPoint's drivers (available for Linux, macOS, and Windows), and support RAID 0, RAID 1, and RAID 10 arrays. On the other hand, the AIC requires no custom drivers. RAID configurations with the AIC will need to be handled by software running on the host OS.
    On the hardware side, all members of the Rocket series come with an external power connector (as the solution can consume upwards of 75W) and integrate a heatsink. The M.2 version is actively cooled, as the drives are housed within the full-height / full-length cards. The solution can theoretically support up to 64 GBps of throughput, but real-world performance is limited to around 56 GBps using Gen 5 drives. It must be noted that even Gen 4 drives can take advantage of the new platform and deliver better performance with the new Rocket series compared to the older SSD7500 series.
    The cards are shipping now, with pricing ranging from $1500 (add-in card) to $2000 (RAID adapters). HighPoint is not alone in targeting this HEDT / workstation market. Sabrent has been teasing their Apex Gen 5.0 x16 solution involving eight M.2 SSDs for a few months now (involving a Microchip PCIe switch). Until that solution comes to the market, HighPoint appears to be the only game in town for workstation users requiring access to direct-attached storage capable of delivering 50 GBps+ speeds.
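    For context on where that ~64 GBps ceiling comes from, here is a quick sketch of the PCIe 5.0 x16 host-link arithmetic (our own numbers based on published PCIe signaling rates, not HighPoint's):
```python
# Where the ~64 GBps host-link ceiling comes from (arithmetic based on PCIe 5.0 signaling).
lanes = 16              # host-facing PCIe 5.0 x16 connection
gt_per_s = 32           # PCIe 5.0 raw rate per lane
encoding = 128 / 130    # 128b/130b line encoding

per_lane_gb_s = gt_per_s * encoding / 8     # ~3.94 GB/s of payload per lane
print(f"PCIe 5.0 x16 payload ceiling: ~{per_lane_gb_s * lanes:.0f} GB/s")
# Protocol/packet overhead and drive behavior account for the ~56 GBps real-world figure.
```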

  • Intel Addresses Desktop Raptor Lake Instability Issues: Faults Excessive Voltage from Microcode,...
    on 22. July 2024 at 23:00

    What started last year as a handful of reports about instability with Intel's Raptor Lake desktop chips has, over the last several months, grown into a much larger saga. Facing their biggest client chip instability impediment in decades, Intel has been under increasing pressure to figure out the root cause of the issue and fix it, as claims of damaged chips have stacked up and rumors have swirled amidst the silence from Intel. But, at long last, it looks like Intel's latest saga is about to reach its end, as today the company has announced that they've found the cause of the issue, and will be rolling out a microcode fix next month to resolve it.
    Officially, Intel has been working to identify the cause of desktop Raptor Lake’s instability issues since at least February of this year, if not sooner. In the interim they have discovered a couple of correlating factors – telling motherboard vendors to stop using ridiculous power settings for their out-of-the-box configurations, and finding a voltage-related bug in Enhanced Thermal Velocity Boost (eTVB) – but neither factor was the smoking gun that set all of this into motion. All of which had left Intel to continue searching for the root cause in private, and lots of awkward silence to fill the gaps in public.
    But it looks like Intel’s search has finally come to an end – even if Intel isn’t putting the smoking gun on public display quite yet. According to a fresh update posted to the company’s community website, Intel has determined the root cause at last, and has a fix in the works. Per the company’s announcement, Intel has tracked down the cause of the instability issue to “elevated operating voltages”, which, at its heart, stems from a flawed algorithm in Intel’s microcode that requested the wrong voltage. Consequently, Intel will be able to resolve the issue through a new microcode update, which, pending validation, is expected to be released in the middle of August.
    Based on extensive analysis of Intel Core 13th/14th Gen desktop processors returned to us due to instability issues, we have determined that elevated operating voltage is causing instability issues in some 13th/14th Gen desktop processors. Our analysis of returned processors confirms that the elevated operating voltage is stemming from a microcode algorithm resulting in incorrect voltage requests to the processor. Intel is delivering a microcode patch which addresses the root cause of exposure to elevated voltages. We are continuing validation to ensure that scenarios of instability reported to Intel regarding its Core 13th/14th Gen desktop processors are addressed. Intel is currently targeting mid-August for patch release to partners following full validation. Intel is committed to making this right with our customers, and we continue asking any customers currently experiencing instability issues on their Intel Core 13th/14th Gen desktop processors reach out to Intel Customer Support for further assistance. -Intel Community Post
    And while there’s nothing good for Intel about Raptor Lake’s instability issues or the need to fix them, that the problem can be ascribed to (or at least fixed by) microcode is about the best possible outcome the company could hope for. Across the full spectrum of potential causes, microcode is the easiest to fix at scale – microcode updates are already distributed through OS updates, and all chips of a given stepping (millions in all) run the same microcode.
    Even a motherboard BIOS-related issue would be much harder to fix given the vast number of different boards out there, never mind a true hardware flaw that would require Intel to replace even more chips than they already have. Still, we’d also be remiss if we didn’t note that microcode is regularly used to paper over issues further down in the processor, as we’ve most famously seen with the Meltdown/Spectre fixes several years ago. So while Intel is publicly attributing the issue to microcode bugs, there are several more layers to the onion that is modern CPUs that could be playing a part. In that respect, a microcode fix grants the least amount of insight into the bug and the performance implications of its fix, since microcode can be used to mitigate so many different issues.
    But for now, Intel’s focus is on communicating that they have a fix and establishing a timeline for distributing it. The matter has certainly caused them a lot of consternation over the last year, and it will continue to do so for at least another month. In the meantime, we’ve reached out to our Intel contacts to see if the company will be publishing additional details about the voltage bug and its fix. “Elevated operating voltages” is not a very satisfying answer on its own, and given the unprecedented nature of the issue, we’re hoping that Intel will be able to share additional details as to what’s going on, and how Intel will be preventing it in the future.
    Intel Also Confirms a Via Oxidation Manufacturing Issue Affected Early Raptor Lake Chips
    Tangential to this news, Intel has also made a couple of other statements regarding chip instability to the press and public over the last 48 hours that also warrant some attention. First and foremost, leading up to Intel’s official root cause analysis of the desktop Raptor Lake instability issues, one possibility that couldn’t be written off at the time was that the root cause of the issue was a hardware flaw of some kind. And while the answer to that turned out to be “no,” there is a rather important “but” in there, as well. As it turns out, Intel did have an early manufacturing flaw in the enhanced version of the Intel 7 process node used to build Raptor Lake. According to a post made by Intel to Reddit this afternoon, a “via Oxidation manufacturing issue” was addressed in 2023. However, despite the suspicious timing, according to Intel this is separate from the microcode issue driving instability issues with Raptor Lake desktop processors up to today.
    Short answer: We can confirm there was a via Oxidation manufacturing issue (addressed back in 2023) but it is not related to the instability issue. Long answer: We can confirm that the via Oxidation manufacturing issue affected some early Intel Core 13th Gen desktop processors. However, the issue was root caused and addressed with manufacturing improvements and screens in 2023. We have also looked at it from the instability reports on Intel Core 13th Gen desktop processors and the analysis to-date has determined that only a small number of instability reports can be connected to the manufacturing issue. For the Instability issue, we are delivering a microcode patch which addresses exposure to elevated voltages which is a key element of the Instability issue. We are currently validating the microcode patch to ensure the instability issues for 13th/14th Gen are addressed.
    -Intel Reddit Post
    Ultimately, Intel says that they caught the issue early on, and that only a small number of Raptor Lake chips were affected by the via oxidation manufacturing flaw. Which is hardly going to come as a comfort to Raptor Lake owners who are already worried about the instability issue, but if nothing else, it’s helpful that the issue is being publicly documented. Typically, these sorts of early teething issues go unmentioned, as even in the best of scenarios, some chips inevitably fail prematurely. Unfortunately, Intel’s revelation here doesn’t offer any further details on what the issue is, or how it manifests itself beyond further instability. Though at the end of the day, as with the microcode voltage issue, the fix for any affected chips will be to RMA them with Intel to get a replacement.
    Laptops Not Affected by Raptor Lake Microcode Issue
    Finally, ahead of the previous two statements, Intel also released a statement to Digital Trends and a few other tech websites over the weekend, in response to accusations that Intel’s 13th generation Core mobile CPUs were also impacted by what we now know to be the microcode flaw. In the statement, Intel refuted those claims, stating that laptop chips were not suffering from the same instability issue.
    Intel is aware of a small number of instability reports on Intel Core 13th/14th Gen mobile processors. Based on our in-depth analysis of the reported Intel Core 13th/14th Gen desktop processor instability issues, Intel has determined that mobile products are not exposed to the same issue. The symptoms being reported on 13th/14th Gen mobile systems – including system hangs and crashes – are common symptoms stemming from a broad range of potential software and hardware issues. As always, if users are experiencing issues with their Intel-powered laptops we encourage them to reach out to the system manufacturer for further assistance. -Intel Rep to Digital Trends
    Instead, Intel attributed any laptop instability issues to typical hardware and software issues – essentially claiming that they weren’t experiencing elevated instability issues. Whether this statement accounts for the via oxidation manufacturing issue is unclear (in large part because not all 13th Gen Core Mobile parts are Raptor Lake), but this is consistent with Intel’s statements from earlier this year, which have always explicitly cited the instability issues as desktop issues.
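    Since the eventual fix is expected to arrive as a microcode update distributed through BIOS and OS updates, below is a minimal, Linux-only sketch of how one might check which microcode revision a system is currently running; the revision number that will contain Intel's fix has not been published, so this only reports what is loaded today:
```python
# Minimal, Linux-only sketch: report the microcode revision(s) currently loaded,
# as listed in /proc/cpuinfo.
def loaded_microcode_revisions(path="/proc/cpuinfo"):
    revisions = set()
    with open(path) as f:
        for line in f:
            if line.startswith("microcode"):
                revisions.add(line.split(":", 1)[1].strip())
    return revisions

if __name__ == "__main__":
    found = loaded_microcode_revisions()
    print("Loaded microcode revision(s):", ", ".join(sorted(found)) if found else "none found")
```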

  • Tenstorrent Launches Wormhole AI Processors: 466 FP8 TFLOPS at 300W
    on 19. July 2024 at 18:30

    Tenstorrent has unveiled its next-generation Wormhole processor for AI workloads that promises to offer decent performance at a low price. The company currently offers two add-on PCIe cards carrying one or two Wormhole processors, as well as TT-LoudBox and TT-QuietBox workstations aimed at software developers. The whole of today's release is aimed at developers rather than those who will deploy the Wormhole boards for their commercial workloads.
    “It is always rewarding to get more of our products into developer hands. Releasing development systems with our Wormhole™ card helps developers scale up and work on multi-chip AI software,” said Jim Keller, CEO of Tenstorrent. “In addition to this launch, we are excited that the tape-out and power-on for our second generation, Blackhole, is going very well.”
    Each Wormhole processor packs 72 Tensix cores (featuring five RISC-V cores supporting various data formats) with 108 MB of SRAM to deliver 262 FP8 TFLOPS at 1 GHz at 160W thermal design power. A single-chip Wormhole n150 card carries 12 GB of GDDR6 memory featuring a 288 GB/s bandwidth.
    Wormhole processors offer flexible scalability to meet the varying needs of workloads. In a standard workstation setup with four Wormhole n300 cards, the processors can merge to function as a single unit, appearing as a unified, extensive network of Tensix cores to the software. This configuration allows the accelerators to either work on the same workload, be divided among four developers, or run up to eight distinct AI models simultaneously. A crucial feature of this scalability is that it operates natively without the need for virtualization. In data center environments, Wormhole processors will scale both inside one machine using PCIe or outside of a single machine using Ethernet.
    From a performance standpoint, Tenstorrent's single-chip Wormhole n150 card (72 Tensix cores at 1 GHz, 108 MB SRAM, 12 GB GDDR6 at 288 GB/s) is capable of 262 FP8 TFLOPS at 160W, whereas the dual-chip Wormhole n300 board (128 Tensix cores at 1 GHz, 192 MB SRAM, aggregated 24 GB GDDR6 at 576 GB/s) can offer up to 466 FP8 TFLOPS at 300W (according to Tom's Hardware). To put that 466 FP8 TFLOPS at 300W number into context, let's compare it to what AI market leader Nvidia has to offer at this thermal design power. Nvidia's A100 does not support FP8, but it does support INT8 and its peak performance is 624 TOPS (1,248 TOPS with sparsity). By contrast, Nvidia's H100 supports FP8 and its peak performance is a massive 1,670 TFLOPS (3,341 TFLOPS with sparsity) at 300W, which is a big difference from Tenstorrent's Wormhole n300.
    There is a big catch though. Tenstorrent's Wormhole n150 is offered for $999, whereas the n300 is available for $1,399. By contrast, one Nvidia H100 card can retail for $30,000, depending on quantities. Of course, we do not know whether four or eight Wormhole processors can indeed deliver the performance of a single H100, though they will do so at 600W or 1200W TDP, respectively. In addition to cards, Tenstorrent offers developers pre-built workstations with four n300 cards inside: the less expensive Xeon-based TT-LoudBox with active cooling and a premium EPYC-powered TT-QuietBox with liquid cooling. Sources: Tenstorrent, Tom's Hardware
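    To put the price/performance argument into rough numbers, the sketch below uses only the figures quoted in this article (list prices and dense FP8/TDP ratings); real workloads and street prices will differ:
```python
# Perf-per-dollar and perf-per-watt using only the figures quoted in this article.
cards = {
    # name: (dense FP8 TFLOPS, TDP in watts, quoted price in USD)
    "Tenstorrent Wormhole n150": (262, 160, 999),
    "Tenstorrent Wormhole n300": (466, 300, 1399),
    "NVIDIA H100 (dense FP8)":   (1670, 300, 30000),
}
for name, (tflops, watts, price) in cards.items():
    print(f"{name}: {tflops / price:.3f} TFLOPS per dollar, {tflops / watts:.2f} TFLOPS per watt")
```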

  • TSMC's Q2'24 Results: Best Quarter Ever as HPC Revenue Share Exceeds 52% on AI Demand
    on 18. July 2024 at 17:00

    Taiwan Semiconductor Manufacturing Co. this week said its revenue for the second quarter of 2024 reached $20.82 billion, making it the company's best quarter (at least in dollars) to date. TSMC's high-performance computing (HPC) platform revenue share exceeded 52% for the first time in many years due to demand for AI processors and a rebound of the PC market.
    TSMC earned $20.82 billion in revenue for the second quarter of 2024, a 32.8% year-over-year increase and a 10.3% increase from the previous quarter. Perhaps more remarkable, $20.82 billion is a higher result than the company posted in Q3 2022 ($20.23 billion), previously the foundry's best quarter to date. Otherwise, in terms of profitability, TSMC booked $7.59 billion in net income for the quarter, for a gross margin of 53.2%. This is a decent bit off of TSMC's record margin of 60.4% (Q3'22), and comes as the company is still in the process of further ramping its N3 (3nm-class) fab lines.
    When it comes to wafer revenue share, the company's N3 process technologies (3nm-class) accounted for 15% of wafer revenue in Q2 (up from 9% in the previous quarter), N5 production nodes (4nm and 5nm-classes) commanded 35% of TSMC's earnings in the second quarter (down from 37% in Q1 2024), and N7 fabrication processes (6nm and 7nm-classes) accounted for 17% of the foundry's wafer revenue in the second quarter of 2024 (down from 19% in Q1 2024). Advanced technologies (N3, N5, and N7) together accounted for 67% of total wafer revenue.
    "Our business in the second quarter was supported by strong demand for our industry-leading 3nm and 5nm technologies, partially offset by continued smartphone seasonality," said Wendell Huang, Senior VP and Chief Financial Officer of TSMC. "Moving into third quarter 2024, we expect our business to be supported by strong smartphone and AI-related demand for our leading-edge process technologies."
    TSMC usually starts ramping up production for Apple's fall products (e.g. the iPhone) in the second quarter of the year, so it is not surprising that the revenue share of N3 increased in Q2 of this year. Yet, keeping in mind that TSMC's revenue in general increased by 10.3% QoQ, the company's shipments of processors made on N5 and N7 nodes are showing resilience as demand for AI and HPC processors is high across the industry.
    Speaking of TSMC's HPC sales, HPC platform sales accounted for 52% of TSMC's revenue for the first time in many years. The world's largest contract maker of chips produces many types of chips that get placed under the HPC umbrella, including AI processors, CPUs for client PCs, and system-on-chips (SoCs) for consoles, just to name a few. Yet, in this case TSMC attributes demand for AI processors as the main driver for its HPC success. As for smartphone platform revenue, its share dropped to 33% as actual sales declined by 1% quarter-over-quarter. All other segments grew by 5% to 20%.
    For the third quarter of 2024, TSMC expects revenue between US$22.4 billion and US$23.2 billion, with a gross profit margin of 53.5% to 55.5% and an operating profit margin of 42.5% to 44.5%. The company's sales are projected to be driven by strong demand for leading-edge process technologies as well as increased demand for AI and smartphone-related applications.
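    For readers who want the implied prior-period figures, the quoted growth rates can be worked backwards as follows (rounded estimates; TSMC's officially reported numbers may differ slightly due to FX and rounding):
```python
# Implied prior-period figures from the quoted growth rates (rounded estimates).
q2_2024_rev = 20.82                 # $ billion
yoy_growth, qoq_growth = 0.328, 0.103

q1_2024_rev = q2_2024_rev / (1 + qoq_growth)
q2_2023_rev = q2_2024_rev / (1 + yoy_growth)
gross_profit = q2_2024_rev * 0.532  # 53.2% gross margin

print(f"Implied Q1 2024 revenue: ~${q1_2024_rev:.2f}B")
print(f"Implied Q2 2023 revenue: ~${q2_2023_rev:.2f}B")
print(f"Implied Q2 2024 gross profit: ~${gross_profit:.2f}B")
```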

  • The Corsair RM750e ATX 3.1 Review: Simple And Effective
    on 18. July 2024 at 13:00

    As mainstream power supplies continue to make their subtle shift to the ATX 3.1 standard, the pace of change is picking up. Already most vendors offer at least one ATX 3.1 unit in their lineups, and thanks to the relatively small set of changes that come with the revised standard, PSU vendors have largely been able to tweak their existing ATX 3.0 designs, allowing them to quickly roll out updated power supplies. This means that the inflection point for ATX 3.1 as a whole is quickly approaching, as more and more designs get their update and make their way out to retail shelves.
    Today we're looking at our first ATX 3.1-compliant PSU from Corsair, one of the industry's most prolific (and highest profile) power supply vendors. Their revised RMe line of power supplies is aimed at the mainstream gaming market, which is perhaps not too surprising given how important ATX 3.1 support and safety are to video cards. The RM750e model we're looking at today is the smallest capacity in the lineup, which stretches from 750 Watts up to a hefty 1200 Watts. Overall, the RM750e is built to meet the demands of contemporary gaming systems, and boasts a great balance between features, performance, and cost. It is an 80Plus Gold certified unit with modular cables, is PCIe 5.1/ATX 3.1 certified, and offers a single 600W 12V-2x6 connector. We will explore its specifications, construction, and performance to determine its standing in today’s market.

  • Best CPUs for Gaming: July 2024
    on 17. July 2024 at 20:00

    As the second half of 2024 unfolds, there are many things to be excited about, especially as Computex 2024 has been and gone. We now know that AMD's upcoming Ryzen 9000 series desktop processors using the new Zen 5 cores will be hitting shelves at the end of the month (31st July), and on top of this, AMD also recently slashed pricing on their Zen 4 (Ryzen 7000) processors. Intel still needs to follow suit with their 14th or 13th Gen Core series processors, but right now from a cost standpoint, AMD is in a much better position.
    Since the publication of our last guide, the only notable CPU to be launched was Intel's special binned Core i9-14900KS, which not only pushes clock speeds up to 6.2 GHz but is also the last processor to feature Intel's iconic Core i series nomenclature. The other big news in the CPU world was from Intel, with a statement issued about pushing users to use the Intel Default Specification on Intel's 14th and 13th Gen processors, which ultimately limits the performance compared to published data.
    While the CPU market has been relatively quiet so far this year, and things are set to pick up once AMD's Zen 5 and Intel's Arrow Lake desktop chips are all launched onto the market, for today we are working from the same hymn sheet as our previous guide. With AMD's price drops on Ryzen 7000 series processors, much of the guide reflects this, as AMD and Intel's performance is neck and neck in many use cases, but cost certainly plays a big factor in selecting a new CPU. As we move into the rest of 2024, the CPU market looks set to see the rise of the 'AI PC,' which is something that many companies will focus on by the end of 2024, both on mobile and desktop platforms.

  • Crucial P310 NVMe SSD Unveiled: Micron's Play in the M.2 2230 Market
    on 17. July 2024 at 13:00

    Hand-held gaming consoles based on notebook platforms (such as the Valve SteamDeck, ASUS ROG Ally, and the MSI Claw) are one of the fastest growing segments in the PC gaming market. The form-factor of such systems has created a demand for M.2 2230 NVMe SSDs. Almost all vendors have a play in this market, and even Micron has OEM SSDs (such as the Micron 2400, 2550, and 2500 series) in this form-factor. Crucial has strangely not had an offering with its own brand name to target this segment, but that changes today with the launch of the Crucial P310 NVMe SSD.
    The Crucial P310 is a family of M.2 2230 PCIe Gen4 NVMe SSDs boasting class-leading read/write speeds of 7.1 GBps and 6 GBps. The family currently has two capacity points - 1 TB and 2 TB. Micron claims that the use of its 232L 3D NAND and Phison's latest E27T DRAM-less controller (fabricated in TSMC's 12nm process) helps in reducing power consumption under active use compared to the competition - directly translating to better battery life for the primary use-case involving gaming handheld consoles. Based on the specifications, it appears that the drives are using 232L 3D QLC. Compared to the recently-released Micron 2550 SSD series in the same form-factor, a swap in the controller has enabled some improvements in both power efficiency and performance. The other specifications are summarized below.
    Crucial P310 SSD Specifications
    - Capacities: 2 TB and 1 TB
    - Controller: Phison E27T (DRAM-less)
    - NAND flash: Micron 232L 3D QLC NAND
    - Form factor / interface: single-sided M.2-2230, PCIe 4.0 x4, NVMe
    - Sequential read / write: 7100 MB/s / 6000 MB/s
    - Random read / write: 1M IOPS / 1.2M IOPS
    - SLC caching: yes; TCG Pyrite encryption: yes
    - Warranty: 5 years
    - Write endurance: 440 TBW (2 TB) / 220 TBW (1 TB), 0.12 DWPD
    - MSRP: $215 (2 TB) / $115 (1 TB)
    The power efficiency, cost, and capacity points are plus points for the Crucial P310 family. However, the endurance ratings are quite low. Gaming workloads are inherently read-heavy, and this may not be a concern for the average consumer. However, a 0.12 DWPD rating may turn out to be a negative aspect when compared against the competition's 0.33 DWPD offerings in the same segment.
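    The 0.12 DWPD figure follows directly from the TBW ratings and the 5-year warranty; a quick sanity check of that spec-sheet math:
```python
# DWPD from the quoted TBW ratings, spread over the 5-year warranty.
def dwpd(tbw, capacity_tb, warranty_years=5):
    return tbw / (capacity_tb * warranty_years * 365)

for capacity_tb, tbw in [(2, 440), (1, 220)]:
    print(f"{capacity_tb} TB model: {dwpd(tbw, capacity_tb):.2f} DWPD")   # -> 0.12 DWPD
```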

  • Samsung Validates LPDDR5X Running at 10.7 GT/sec with MediaTek's Dimensity 9400 SoC
    on 17. July 2024 at 12:00

    Samsung has successfully validated its new LPDDR5X-10700 memory with MediaTek's upcoming Dimensity platform. At present, 10.7 GT/s is the highest performing speed grade of LPDDR5X DRAM slated to be released this year, so the upcoming Dimensity 9400 system-on-chip will get the highest memory bandwidth available for a mobile application processor. The verification process involved Samsung's 16 GB LPDDR5X package and MediaTek's soon-to-be-announced Dimensity 9400 SoC for high-end 5G smartphones. Usage of LPDDR5X-10700 provides a memory bandwidth of 85.6 GB/second over a 64-bit interface, which will be available for bandwidth-hungry applications like graphics and generative AI.
    "Working together with Samsung Electronics has made it possible for MediaTek's next-generation Dimensity chipset to become the world's first to be validated at LPDDR5X operating speeds up to 10.7Gbps, enabling upcoming devices to deliver AI functionality and mobile performance at a level we have never seen before," said JC Hsu, Corporate Senior Vice President at MediaTek. "This updated architecture will make it easier for developers and users to leverage more AI capabilities and take advantage of more features with less impact on battery life."
    Samsung's LPDDR5X 10.7 GT/s memory is made on the company's 12nm-class DRAM process technology and is said to provide a more than 25% improvement in power efficiency over previous-generation LPDDR5X, in addition to extra performance. This will translate into an improved user experience, including enhanced on-device AI capabilities, such as faster voice-to-text conversion, and better-quality graphics.
    Overall, the two companies completed this process in just three months, though it remains to be seen when smartphones based on the Dimensity 9400 application processor and LPDDR5X memory will be available on the market, as MediaTek has not yet even formally announced the SoC itself. "Through our strategic cooperation with MediaTek, Samsung has verified the industry's fastest LPDDR5X DRAM that is poised to lead the AI smartphone market," said YongCheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "Samsung will continue to innovate through active collaboration with customers and provide optimum solutions for the on-device AI era."
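    The 85.6 GB/s figure falls straight out of the data rate and bus width; a one-line check of the arithmetic (ours, not Samsung's measurement):
```python
data_rate_gt_s = 10.7    # LPDDR5X-10700
bus_width_bits = 64      # interface width cited for the Dimensity platform
print(f"{data_rate_gt_s * bus_width_bits / 8:.1f} GB/s")  # -> 85.6 GB/s
```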

  • Western Digital Adds 8TB Model to Popular High-End SN850X SSD Drive Family
    on 16. July 2024 at 21:30

    Western Digital has quietly introduced an 8 TB version of its high-end SN850X SSD, doubling the top capacity of the well-regarded drive family. The new drive offers performance on par with other members of the range, but with twice as much capacity as the previous top-end model – and with a sizable price premium to go with its newfound capacity.
    Western Digital introduced its WD_Black SN850X SSDs in the summer of 2022, releasing single-sided 1 TB and 2 TB models, along with a double-sided 4 TB model. But now almost two years down the line, the company has seen fit to introduce the even higher capacity 8 TB model to serve as their flagship PCIe 4.0 SSD, and keep up with the times on NAND prices and SSD capacity demands.
    Like the other SN850X models, WD is using their in-house, 8-channel controller for the new 8 TB model, which sports a PCIe 4.0 x4 interface. And being that this is a high-end SSD, the controller is paired with DRAM (DDR4) for page index caching, though WD doesn't disclose how much DRAM is on any given model. On the NAND front, WD is apparently still using their BiCS 5 112L NAND here, which means we're looking at 4x 2 TB NAND chips, each with 16 1Tbit TLC dies on-board, twice as many dies as were used on the NAND chips for the 4 TB model.
    The peak read speed of the new 8TB model is 7,200 MB/sec, which is actually a smidge below the performance of the 4 TB and 2 TB models due to the overhead from the additional NAND dies. Meanwhile peak sequential write speeds remain at 6,600 MB/sec, while 4K random performance maxes out at 1200K IOPS for both reads and writes. It goes without saying that this is a step below the performance of the market flagship PCIe 5.0 SSDs available today, but it's going to be a bit longer until anyone else besides Phison is shipping a PCIe 5.0 controller – never mind the fact that these drives aren't available in 8 TB capacities.
    The 8 TB SN850X also keeps the same drive endurance progression as the rest of the SN850X family. In this case, double the NAND brings double the endurance of the 4 TB model, for an overall endurance of 4800 terabytes written (TBW). Or in terms of drive writes per day, this is the same 0.33 rating as the other SN850X drives.
    WD_Black SN850X SSD Specifications (all models: WD in-house 8-channel controller with DDR4 DRAM, WD BiCS 5 TLC NAND, M.2-2280 PCIe 4.0 x4 NVMe, SLC caching, TCG Opal 2.01 encryption, 5-year warranty, 0.33 DWPD)
    - 8 TB (double-sided): 7200 MB/s read, 6600 MB/s write, 1200K / 1200K random read/write IOPS, 4800 TBW, $850 MSRP (no heatsink)
    - 4 TB (double-sided): 7300 MB/s read, 6600 MB/s write, 1200K / 1100K random read/write IOPS, 2400 TBW, $260 MSRP (no heatsink)
    - 2 TB (single-sided): 7300 MB/s read, 6600 MB/s write, 1200K / 1100K random read/write IOPS, 1200 TBW, $140 MSRP (no heatsink)
    - 1 TB (single-sided): 7300 MB/s read, 6300 MB/s write, 800K / 1100K random read/write IOPS, 600 TBW, $85 MSRP (no heatsink)
    Western Digital's WD_Black SN850X is available both with and without an aluminum heatsink. The version without a heatsink, aimed at laptops and BYOC setups, costs $849.99, whereas a version with an aluminum heat spreader comes in at $899.99. In both cases the 8 TB drive carries a significant price premium over the existing 4 TB model, which is readily available for $259.99. This kind of price premium is unfortunately typical for 8 TB drives, and will likely remain so until both supply and demand for the high-capacity drives picks up to bring prices down.
Still, with rival drives such as Corsair's MP600 Pro XT 8 TB and Sabrent's Rocket 4 Plus 8 TB going for $965.99 and $1,199.90 respectively, the introduction of the 8 TB SN850X is definitely pushing high-capacity M.2 SSD prices down, albeit slowly. So for systems with multiple M.2 slots, at least, the sweet spot on drive pricing is still to get two 4 TB SSDs.
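    Working out the price per terabyte from the launch prices quoted above makes the "two 4 TB drives" argument easy to see (street prices move quickly, so treat this as a snapshot):
```python
# Price per terabyte at the quoted launch prices.
drives = {
    "WD_Black SN850X 8 TB (no heatsink)": (8, 849.99),
    "WD_Black SN850X 4 TB":               (4, 259.99),
    "Corsair MP600 Pro XT 8 TB":          (8, 965.99),
    "Sabrent Rocket 4 Plus 8 TB":         (8, 1199.90),
}
for name, (capacity_tb, price) in drives.items():
    print(f"{name}: ~${price / capacity_tb:.0f} per TB")
```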

  • Micron Expands Datacenter DRAM Portfolio with MR-DIMMs
    on 16. July 2024 at 13:10

    The compute market has always been hungry for memory bandwidth, particularly for high-performance applications in servers and datacenters. In recent years, the explosion in core counts per socket has further accentuated this need. Despite progress in DDR speeds, the available bandwidth per core has unfortunately not seen a corresponding scaling. The stakeholders in the industry have been attempting to address this by building additional technology on top of existing widely-adopted memory standards. With DDR5, there are currently two technologies attempting to increase the peak bandwidth beyond the official speeds. In late 2022, SK hynix introduced MCR-DIMMs meant for operating with specific Intel server platforms. On the other hand, JEDEC - the standards-setting body - also developed specifications for MR-DIMMs with a similar approach. Both of them build upon existing DDR5 technologies by attempting to combine multiple ranks to improve peak bandwidth and latency.
    How MR-DIMMs Work
    The MR-DIMM standard is conceptually simple - there are multiple ranks of memory modules operating at standard DDR5 speeds with a data buffer in front. The buffer operates at 2x the speed on the host interface side, allowing for essentially double the transfer rates. The challenges obviously lie in being able to operate the logic in the host memory controller at the higher speed and keeping the power consumption / thermals in check. The first version of the JEDEC MR-DIMM standard specifies speeds of 8800 MT/s, with the next generation at 12800 MT/s. JEDEC also has a clear roadmap for this technology, keeping it in sync with the improvements in the DDR5 standard.
    Micron MR-DIMMs - Bandwidth and Capacity Plays
    Micron and Intel have been working closely in the last few quarters to bring the former's first-generation MR-DIMM lineup to the market. Intel's Xeon 6 Family with P-Cores (Granite Rapids) is the first platform to bring MR-DIMM support at 8800 MT/s on the host side. Micron's standard-sized MR-DIMMs (suitable for 1U servers) and TFF (tall form-factor) MR-DIMMs (for 2U+ servers) have been qualified for use with the same. The benefits offered by MR-DIMMs are evident from the JEDEC specifications, allowing for increased data rates and system bandwidth, with improvements in latency. On the capacity side, allowing for additional ranks on the modules has enabled Micron to offer a 256 GB capacity point. It must be noted that some vendors are also using TSV (through-silicon vias) technology to increase the per-package capacity at standard DDR5 speeds, but this adds additional cost and complexity that are largely absent in the MR-DIMM manufacturing process.
    The tall form-factor (TFF) MR-DIMMs have a larger surface area compared to the standard-sized ones. For the same airflow configuration, this allows the DIMM to have a better thermal profile. This provides benefits for energy efficiency as well by reducing the possibility of thermal throttling. Micron is launching a comprehensive lineup of MR-DIMMs in both standard and tall form-factors today, with multiple DRAM densities and speed options.
    MRDIMM Benefits - Intel Granite Rapids Gets a Performance Boost
    Micron and Intel hosted a media / analyst briefing recently to demonstrate the benefits of MR-DIMMs for Xeon 6 with P-Cores (Granite Rapids). Using a 2P configuration with 96-core Xeon 6 processors, benchmarks for different workloads were processed with both 8800 MT/s MR-DIMMs and 6400 MT/s RDIMMs.
    The chosen workloads are particularly notorious for being limited by memory bandwidth. OpenFOAM is a widely-used CFD workload that benefits from MR-DIMMs. For the same memory capacity, the 8800 MT/s MR-DIMM shows a 1.31x speedup based on higher average bandwidth and IPC improvements, along with lower last-level cache miss latency. The performance benefits are particularly evident with more cores participating in the workload. Apache Spark is a commonly used big-data platform operating on large datasets. Depending on the exact dataset, the performance benefits of MR-DIMMs can vary. Micron and Intel used a 2.4TB set from Intel's HiBench benchmark suite for this benchmark, showing a 1.2x speedup at the same capacity and a 1.7x speedup with doubled-capacity TFF MR-DIMMs. Avoiding the need to push data back to permanent storage also contributes to the speedup. The higher speed offered by MR-DIMMs also helps in AI inferencing workloads, with Micron and Intel showing a 1.31x inference performance improvement along with reduced time to first token for a Llama 3 8B parameter model. Obviously, purpose-built inferencing solutions based on accelerators will perform better. However, this was offered as a demonstration of the type of CPU workloads that can benefit from MR-DIMMs. As the adage goes, there is no free lunch. At 8800 MT/s, MR-DIMMs are definitely going to guzzle more power compared to 6400 MT/s RDIMMs. However, the faster completion of workloads means that the energy consumption for a given workload will be lower for the MR-DIMM configurations. We would have liked Micron and Intel to quantify this aspect for the benchmarks presented in the demonstration. Additionally, Micron indicated that the energy efficiency (in terms of picojoules per bit transferred) is largely similar for both the 6400 MT/s RDIMMs and 8800 MT/s MR-DIMMs.

    Key Takeaways

    The standardization of MR-DIMMs by JEDEC allows multiple industry stakeholders to participate in the market. Customers are not vendor-locked and can compare and contrast options from different vendors to choose the best fit for their needs. At Computex, we saw MR-DIMMs from ADATA on display. As a Tier-2 vendor without its own DRAM fab, ADATA's play is on cost benefits, with the possibility of the DRAM die being sourced from different fabs. The MR-DIMM board layout is dictated by JEDEC specifications, and this allows Tier-2 vendors to have their own play with pricing flexibility. Modules are also built based on customer orders. Micron, on the other hand, has a more comprehensive portfolio of SKUs for different use-cases, with the pros and cons of vertical integration in the picture. Micron is also not the first to publicly announce MR-DIMM sampling: Samsung announced its own lineup (based on 16Gb DRAM dies) last month. It must be noted that Micron's MR-DIMM portfolio uses 16 Gb, 24 Gb, and 32 Gb dies fabricated in its 1β technology. While Samsung's process for the 16 Gb dies used in its MR-DIMMs is not known, Micron believes that its MR-DIMM technology will provide better power efficiency compared to the competition while also offering customers a wider range of capacities and configurations.
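    For a rough sense of what the doubled host-side transfer rate means, the sketch below works out the theoretical peak bandwidth of a 64-bit DDR5 data bus at the RDIMM and MR-DIMM speeds quoted above. This is back-of-the-envelope math based only on the quoted transfer rates, not a Micron or Intel benchmark figure.

```python
# Theoretical peak bandwidth of a 64-bit (8-byte) DDR5 data bus.
# Real modules carry extra ECC bits and never sustain the theoretical peak.
def peak_bandwidth_gbs(transfer_rate_mts: int, bus_width_bits: int = 64) -> float:
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    bytes_per_transfer = bus_width_bits / 8
    return transfer_rate_mts * bytes_per_transfer / 1000  # MB/s -> GB/s

for label, rate in (("DDR5-6400 RDIMM", 6400),
                    ("Gen1 MR-DIMM", 8800),
                    ("Gen2 MR-DIMM", 12800)):
    print(f"{label}: {peak_bandwidth_gbs(rate):.1f} GB/s per module")
# DDR5-6400 RDIMM: 51.2 GB/s, Gen1 MR-DIMM: 70.4 GB/s, Gen2 MR-DIMM: 102.4 GB/s
```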

  • The AMD Zen 5 Microarchitecture: Powering Ryzen AI 300 Series For Mobile and Ryzen 9000 for Desktop
    on 15. July 2024 at 13:00

    Back at Computex 2024, AMD unveiled their highly anticipated Zen 5 CPU microarchitecture during AMD CEO Dr. Lisa Su's opening keynote. AMD announced not one but two new client platforms that will utilize the latest Zen 5 cores. This includes AMD's latest AI PC-focused chip family for the laptop market, the Ryzen AI 300 series, while the Ryzen 9000 series caters to the desktop market and uses the preexisting AM5 platform. Built around the new Zen 5 CPU microarchitecture with some fundamental improvements to both graphics and AI performance, the Ryzen AI 300 series, code-named Strix Point, is set to deliver gains in several areas. The Ryzen AI 300 series looks set to add another milestone in the march towards the AI PC with its mobile SoC featuring a new XDNA 2 NPU, from which AMD promises 50 TOPS of performance. AMD has also upgraded the integrated graphics to RDNA 3.5, which is designed to replace the last generation of RDNA 3 mobile graphics and deliver better performance in games than we've seen before. Further to this, during its recent Tech Day last week, AMD disclosed some of the technical details regarding Zen 5, covering a number of key elements under the hood of both the Ryzen AI 300 and the Ryzen 9000 series. On paper, the Zen 5 architecture looks like quite a big step up compared to Zen 4, with the key driver being higher instructions per cycle (IPC) than its predecessor - something AMD has managed to deliver consistently from Zen to Zen 2, Zen 3, Zen 4, and now Zen 5.

  • Troubled AI Processor Developer Graphcore Finds a Buyer: SoftBank
    on 12. July 2024 at 20:30

    After months of searching for a buyer, troubled U.K.-based AI processor designer Graphcore said on Friday that it has been acquired by SoftBank. The company will operate as a wholly owned subsidiary of SoftBank and will possibly collaborate with Arm, but it remains to be seen what happens to the unique architecture of Graphcore's intelligence processing units (IPUs). Graphcore will retain its name as a wholly owned subsidiary of SoftBank, which paid either $400 million (according to EE Times) or $500 million (according to BBC) for the company. Over its lifetime, Graphcore has received a total of $700 million of investments from Microsoft and Sequoia Capital, and at its peak in late 2020, was valued at $2.8 billion. Nigel Toon will remain at the helm of Graphcore, which will hire new staff in its UK offices and continue to be headquartered in Bristol, with additional offices in Cambridge, London, Gdansk (Poland), and Hsinchu (Taiwan). "This is a tremendous endorsement of our team and their ability to build truly transformative AI technologies at scale, as well as a great outcome for our company," said Nigel Toon. "Demand for AI compute is vast and continues to grow. There remains much to do to improve efficiency, resilience, and computational power to unlock the full potential of AI. In SoftBank, we have a partner that can enable the Graphcore team to redefine the landscape for AI technology." Although Graphcore says that it had won contracts with major high-tech companies and deployed its IPUs, it could not compete against NVIDIA and other off-the-shelf AI processor vendors due to insufficient funding. In recent years, the company's problems were so severe that it had to lay off 20% of its staff, bringing its headcount to around 500. Those cuts also saw office closures in Norway, Japan, and South Korea, which made it even harder to compete against big players. Graphcore certainly hopes that with SoftBank's deep pockets and willingness to invest in AI technologies in general and AI processors in particular, it will finally be able to compete head-to-head with established players like NVIDIA. When asked whether Graphcore will work with SoftBank's Arm, Nigel Toon said that he was looking forward to working with all companies controlled by its new parent, including Arm. Meanwhile, SoftBank itself is reportedly looking to build its own AI processor venture, called Project Izanagi, to compete against NVIDIA, whereas Arm is reportedly developing AI processors that will work in datacenters owned by SoftBank. Therefore, it remains to be seen where Graphcore fits in. For now, the best processor that Graphcore has is its Colossus MK2 IPU, which is built using 59.4 billion transistors and packs in 1,472 independent cores with simultaneous multithreading (SMT), capable of handling 8,832 parallel threads. Instead of using HBM or other types of external memory, the chip integrates 900 MB of SRAM, providing an aggregated bandwidth of 47.5 TB/s per chip. Additionally, it features 10 IPU links to scale with other MK2 processors. When it comes to performance, the MK2 C600 delivers 560 TFLOPS FP8, 280 TFLOPS FP16, and 70 TFLOPS of FP32 performance at 185W. To put the numbers into context, NVIDIA's A100 delivers 312 FP16 TFLOPS without sparsity as well as 19.5 FP32 TFLOPS, whereas NVIDIA's H100 card offers 3,341 FP8 TFLOPS. Sources: Graphcore, EE Times, BBC, Reuters
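    To put the MK2 C600's quoted throughput figures on a per-watt basis, here is a quick calculation using only the numbers cited above (560/280/70 TFLOPS at a 185 W board power); it is a derived illustration, not a figure Graphcore publishes.

```python
# Throughput per watt for the Colossus MK2 C600, derived from the figures above.
specs_tflops = {"FP8": 560, "FP16": 280, "FP32": 70}
board_power_w = 185

for precision, tflops in specs_tflops.items():
    print(f"{precision}: {tflops / board_power_w:.2f} TFLOPS per watt")
# FP8: ~3.03, FP16: ~1.51, FP32: ~0.38 TFLOPS per watt
```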

  • Applied Materials' New Deposition Tool Enables Copper Wires to Be Used for 2nm and Beyond
    on 12. July 2024 at 12:00

    Although the pace of Moore's Law has undeniably slackened in the last decade, transistor density is still increasing with every new process technology. But there is a challenge with feeding power to smaller transistors, as with smaller transistors come thinner power wires within the chip, which increases their resistance and may cause yield loss. Looking to combat that effect, this week Applied Materials introduced its new Applied Endura Copper Barrier Seed IMS with Volta Ruthenium Chemical Vapor Deposition (CVD) tool, which enables chipmakers to keep using copper for wiring with 2 nm-class and more advanced process technologies. Today's advanced logic processors have about 20 layers of metal, with thin signal wires and thicker power wires. Scaling down wiring with shrinking transistors presents numerous challenges. Thinner wires have higher electrical resistance, while closer wires heighten capacitance and electrical crosstalk. The combination of the two can lead to increased power consumption while also limiting performance scaling, which is particularly problematic for datacenter-grade processors that are looking to have it all. Moving power rails to a wafer's back side is expected to enhance performance and efficiency by reducing wiring complexity and freeing up space for more transistors. But a backside power delivery network (BSPDN) does not solve the problem of thin wires in general. As lithographic scaling progresses, both transistor features and wiring trenches become smaller. This reduction means that barriers and liners take up more space in these trenches, leaving insufficient room to deposit copper without creating voids, which raises resistance and can lower yields. Additionally, the closer proximity of wires thins the low-k dielectrics, making them more vulnerable to damage during the etching process. This damage increases capacitance and weakens the chips, making them unsuitable for 3D stacking. Consequently, as the industry advances, copper wiring faces significant physical scaling challenges. But Applied Materials has a solution.

    Adopting Binary RuCo Liners

    Contemporary manufacturing technologies use reflow to fill interconnects with copper, where anneals help the copper flow from the wafer surface into wiring trenches and vias. This process depends on the liners on which the copper flows. Traditionally, a CVD cobalt film has been used for liners, but this film is too thick for 3nm-class nodes (which would affect resistance and yield). Applied Materials proposes using a ruthenium cobalt (RuCo) binary liner with a thickness under 20 angstroms (2 nm), which provides better surface properties for copper reflow. This ultimately allows for 33% more space for void-free conductive copper to be reflowed, reducing the overall resistance by 25%. While usage of the new liner requires new tooling, it can enable better interconnects that mean higher performance, lower power consumption, and higher yields. Applied Materials says that so far its new Endura Copper Barrier Seed IMS with Volta Ruthenium CVD tool has been adopted by all leading logic makers, including TSMC and Samsung Foundry, for their 3nm-class nodes and beyond. "The semiconductor industry must deliver dramatic improvements in energy-efficient performance to enable sustainable growth in AI computing," said Dr. Y.J. Mii, Executive Vice President and Co-Chief Operating Officer at TSMC.
"New materials that reduce interconnect resistance will play an important role in the semiconductor industry, alongside other innovations to improve overall system performance and power." New Low-K Dielectric But thin and efficient liner is not the only thing crucial for wiring at 3nm production nodes and beyond. Trenches for wiring are filed not only with a Co/RuCo liner and a Ta/N barrier, but with low dielectric constant (Low-K) film to minimize electrical charge buildup, reduce power consumption, and lower signal interference. Applied Materials has offered its Black Diamond Low-K film since the early 2000s.  But new production nodes require better dielectrics, so this week the company introduced an upgraded version of Black Diamond material and a plasma-enhanced chemical vapor deposition (PEVCD) tool to apply it, the Producer Black Diamond PECVD series. This new material allows for scaling down to 2nm and beyond by further reducing the dielectric constant while also increasing the mechanical strength of the chips, which is good for 3D stacking both for logic and memory. The new Black Diamond is being rapidly adopted by major logic and DRAM chipmakers, Applied says. "The AI era needs more energy-efficient computing, and chip wiring and stacking are critical to performance and power consumption," said Dr. Prabu Raja, President of the Semiconductor Products Group at Applied Materials. "Applied's newest integrated materials solution enables the industry to scale low-resistance copper wiring to the emerging angstrom nodes, while our latest low-k dielectric material simultaneously reduces capacitance and strengthens chips to take 3D stacking to new heights." Sources: Applied Materials (1, 2)

  • Samsung Joins The 60 TB SSD Club, Looking Forward To 120 TB Drives
    on 5. July 2024 at 15:00

    Multiple companies offer high-capacity SSDs, but until recently, only one company offered high-performance 60 TB-class drives with a PCIe interface: Solidigm. As our colleagues from Blocks & Files discovered, Samsung quietly rolled out its BM1743 61.44 TB solid-state drive in mid-June and now envisions 120 TB-class SSDs based on the same platform. Samsung's BM1743 61.44 TB features a proprietary controller and relies on Samsung's 7th Generation V-NAND (3D NAND) QLC memory. Moreover, Samsung believes that its 7th Gen V-NAND 'has the potential to accommodate up to 122.88 TB.' Samsung plans to offer the BM1743 in two form factors: U.2 for PCIe 4.0 x4 to address traditional servers and E3.S for PCIe 5.0 x4 interfaces to address machines designed to offer maximum storage density. The BM1743 can address various applications, including AI training and inference, content delivery networks, and read-intensive workloads. Accordingly, its write endurance is 0.26 drive writes per day (DWPD) over five years. Regarding performance, Samsung's BM1743 is hardly a champion compared to high-end drives for gaming machines and workstations. The drive can sustainably achieve sequential read speeds of 7,200 MB/s and write speeds of 2,000 MB/s. For random operations, it can handle up to 1.6 million 4K random read IOPS and 110,000 4K random write IOPS. Power consumption details for the BM1743 have not been disclosed, though it is expected to be high. Meanwhile, the drive's key selling point is its massive storage density, which likely outweighs concerns over its absolute power efficiency for its intended applications, as a 60 TB SSD still consumes less power than the multiple storage devices that would be needed to offer similar capacity and performance. As noted above, Samsung's BM1743 61.44 TB faces limited competition in the market, so its price will be quite high. For example, Solidigm's D5-P5336 61.44 TB SSD costs $6,905. Other companies, such as Kioxia, Micron, and SK hynix, have not yet introduced their 60 TB-class SSDs, which gives Samsung and Solidigm an edge for now. UPDATE 7/25: We removed mention of Western Digital's 60 TB-class SSDs, as the company does not currently list any such drives on its website.
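    As a rough illustration of what the 0.26 DWPD rating implies over a five-year service life, the following back-of-the-envelope calculation uses only the capacity and endurance figures above:

```python
# Approximate total host writes implied by the endurance rating quoted above.
capacity_tb = 61.44
dwpd = 0.26              # drive writes per day
service_years = 5

total_writes_pb = capacity_tb * dwpd * 365 * service_years / 1000
print(f"~{total_writes_pb:.1f} PB of host writes over {service_years} years")
# ~29.2 PB across the five-year rating period
```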

  • The Lian Li EDGE EG1000 1000W ATX 3.1 PSU Review: Power On The Edge
    on 5. July 2024 at 13:00

    Lian Li Industrial Co., Ltd., established in 1983, is a Taiwanese company specializing in the manufacture of computer cases, power supplies, and accessories. They are one of the oldest players in the PC market and are known for their focus on aluminum-based designs. Lian Li produces a range of products aimed at both consumer and industrial markets, with the company's offerings including mid-tower and full-tower cases and more compact cases for smaller builds. Amongst consumers and PC enthusiasts, Lian Li's products are recognized for their build quality, modularity, and innovative features, catering to a diverse set of needs in the PC building community. This review focuses on the latest addition to Lian Li's PSU lineup: the EG1000 Platinum ATX 3.1 PSU. This power supply unit partially complies with the ATX 3.1 design guide (the paragraphs related to electrical quality and performance). It is designed to meet the demanding requirements of modern gaming PCs, with its specifications indicating good efficiency and robust power delivery. Featuring fully modular cables with individually sleeved wires, dynamic fan control for optimal cooling, and advanced internal topologies, the EG1000 Platinum aims to provide both reliability and performance. However, behind its long list of features, the highlight of the EG1000 Platinum is the shape of the chassis itself, which forgoes the ATX cuboid shape and standard length.

  • Kioxia's High-Performance 3D QLC NAND Enables High-End High-Capacity SSDs
    on 5. July 2024 at 13:00

    This week, Kioxia introduced its new 3D QLC NAND devices aimed at high-performance, high-capacity drives that could redefine what we typically expect from QLC-based SSDs. The components are 1 Tb and 2 Tb 3D QLC NAND ICs with a 3600 MT/s interface speed that could enable M.2-2230 SSDs with a 4 TB capacity and decent performance. Kioxia's 1 Tb (128 GB) and 2 Tb (256 GB) 3D QLC NAND devices are made on the company's BiCS 8 process technology and feature 238 active layers as well as a CMOS directly Bonded to Array (CBA) design, which implies that the CMOS (including interface and buffer circuitry) is built on a specialized node and bonded to the memory array. Such a manufacturing process enabled Kioxia (and its manufacturing partner Western Digital) to achieve a particularly high interface speed of 3600 MT/s. In addition to being one of the industry's first 2 Tb QLC NAND devices, the component features 70% higher write power efficiency compared to Kioxia's BiCS 5 3D QLC NAND devices, which is a somewhat vague claim as the new ICs have higher capacity and performance in general. This will be valuable for datacenter applications, though 3D QLC memory is unlikely to be used for write-intensive workloads in general. Rather, these devices will be just what the doctor ordered for read-intensive applications such as AI, content distribution, and backup storage. It is interesting to note that Kioxia's 1 Tb 3D QLC NAND, optimized for performance, has 30% faster sequential write performance and 15% lower read latency than the 2 Tb 3D QLC component. These qualities (alongside a 3600 MT/s interface) promise to make Kioxia's 1 Tb 3D QLC competitive even for higher-end PCIe Gen5 x4 SSDs, which currently exclusively use 3D TLC memory. The remarkable storage density of Kioxia's 2 Tb 3D QLC NAND devices will allow customers to create high-capacity SSDs in compact form factors. For instance, a 16-Hi stacked package (measuring 11.5 mm × 13.5 mm × 1.5 mm) can be used to build a 4 TB M.2-2230 drive or a 16 TB M.2-2280 drive. Even a single 16-Hi package could be enough to build a particularly fast client SSD. Kioxia is now sampling its 2 Tb 3D QLC NAND BiCS 8 memory with customers, such as Pure Storage. "We have a long-standing relationship with Kioxia and are delighted to incorporate their eighth-generation BiCS Flash 2Tb QLC flash memory products to enhance the performance and efficiency of our all-flash storage solutions," said Charles Giancarlo, CEO of Pure Storage. "Pure's unified all-flash data storage platform is able to meet the demanding needs of artificial intelligence as well as the aggressive costs of backup storage. Backed by Kioxia technology, Pure Storage will continue to offer unmatched performance, power efficiency, and reliability, delivering exceptional value to our customers." "We are pleased to be shipping samples of our new 2Tb QLC with the new eighth-generation BiCS flash technology," said Hideshi Miyajima, CTO of Kioxia. "With its industry-leading high bit density, high speed data transfer, and superior power efficiency, the 2Tb QLC product will offer new value for rapidly emerging AI applications and large storage applications demanding power and space savings." There is no word on when the 1 Tb 3D QLC BiCS 8 memory will be sampled or released to the market.
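    The capacity points Kioxia quotes follow directly from stacking 2 Tb dies. The sketch below checks that math; the assumption that the 16 TB M.2-2280 configuration uses four 16-Hi packages is mine, not Kioxia's statement.

```python
# Capacity of a 16-Hi stack of 2 Tb QLC dice, and the M.2 drives it could enable.
die_capacity_tbit = 2            # terabits per die
dice_per_package = 16            # 16-Hi stack

package_capacity_tb = die_capacity_tbit * dice_per_package / 8   # Tbit -> TB
print(f"One 16-Hi package: {package_capacity_tb:.0f} TB")        # 4 TB

# Assumed board layouts (package counts are my assumption, not Kioxia's):
print(f"M.2-2230 with 1 package:  {package_capacity_tb * 1:.0f} TB")
print(f"M.2-2280 with 4 packages: {package_capacity_tb * 4:.0f} TB")
```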

  • Noctua Launches New Flagship Cooler: NH-D15 G2 with LGA1851 CPU Support
    on 3. July 2024 at 13:30

    On Tuesday, Noctua introduced its second-generation NH-D15 cooler, which offers refined performance and formally supports Intel's next-generation Arrow Lake-S processors in LGA1851 packaging. Alongside its NH-D15 G2 CPU cooler, Noctua also introduced its NF-A14x25r G2 140mm fans. The Noctua NH-D15 G2 is an enhanced version of the popular NH-D15 cooler with eight heat pipes, two asymmetrical fin-stacks, and two speed-offset 140-mm PWM fans (to avoid acoustic interaction phenomena such as periodic humming or intermittent vibrations). According to the manufacturer, these key components are tailored to work efficiently together to deliver superior quiet cooling performance, rivalling many all-in-one water cooling systems and pushing the boundaries of air cooling efficiency. Noctua offers the NH-D15 G2 in three versions to address the specific requirements of modern CPUs. The regular version is versatile and can be used for AMD's AM5 processors and Intel's LGA1700 CPUs with the included mounting accessories. The HBC (High Base Convexity) variant is tailored for LGA1700 processors, especially those subjected to full ILM pressure or those that have deformed over time, ensuring excellent contact quality despite the concave shape of the CPU. Meanwhile, the LBC (Low Base Convexity) version is tailored for flat rectangular CPUs, providing optimal contact on AMD's AM5 and other similar processors. While there are three versions of the NH-D15 G2 aimed at different processors, they are all said to be compatible with a wide range of motherboards and other hardware. The new coolers' offset construction ensures clearance for the top PCIe x16 slot on most current motherboards. Additionally, they feature the upgraded Torx-based SecuFirm2+ multi-socket mounting system and come with Noctua's NT-H2 thermal compound. For those looking to upgrade existing coolers like the NH-D15, NH-D15S, or NH-U14S series, Noctua is also releasing the NF-A14x25r G2 fans separately. These round-frame fans are fine-tuned in single and dual fan packages to minimize noise levels while offering decent cooling performance. Finally, Noctua is also prepping a square-frame version of the NF-A14x25 G2 fan for release in September. This variant targets water-cooling radiators and case-cooling applications and promises to extend the versatility of Noctua's cooling solutions further. All versions of Noctua's NH-D15 G2 cooler cost $149.90/€149.90. One NF-A14x25 G2 fan costs $39.90/€39.90, whereas a package of two fans costs $79.80/€79.80. The cooler is backed with a six-year warranty.

  • The Enermax PlatiGemini 1200W ATX 3.1 + ATX12VO PSU Review: The Swiss Army Knife
    on 2. July 2024 at 12:00

    In the retail PC PSU space, most of the focus on new standards and their capabilities in the past couple of years has been on ATX 3.0 and its quick follow-up successor, ATX 3.1. And while the revised ATX standard is certainly the most important new standard for the rank-and-file PC builder, it's not the only standard that has been released as of late. Intel and its partners have also developed a standard that, in some respects, goes even further by dropping some of the legacy aspects of ATX and its increasingly esoteric secondary voltages: ATX12VO. Short for "ATX 12 Volts Only", ATX12VO is a standard that's been slower to take off as it makes a pretty hard break with backwards compatibility. But with so many motherboard functions running off of 12V (CPUs and GPUs, for a start), the need for a PSU to provide secondary voltages like 3.3V and 5V just isn't what it was 20 years ago - or even 10. So we've slowly seen PC manufacturers and motherboard makers test the waters, with a handful of designs using the more petite ATX12VO standard. Meanwhile, on the power supply side of things, the outcome has been a bit more interesting, if messy. While ATX12VO motherboards need matching PSUs, there's nothing to say that such a PSU can only be ATX12VO. To reference an ancient meme, the thought at some PSU manufacturers has been "why not both?", leading to high-end PSUs that can bridge the compatibility gap by offering both ATX 3.1 and ATX12VO compatibility. The first example of such a PSU to make it into our labs is Enermax's new PlatiGemini 1200W PSU. Designed to be the Swiss Army knife of modern top-tier PCs, Enermax's PSU offers support for both ATX 3.1 and ATX12VO - ensuring it can power virtually any PC - while driving both modes with a sizeable 1200W design that can power pretty much any desktop PC one can hope to build today. Plus, with features like fully modular cables with per-wire sleeving, dynamic hybrid fan control for optimal cooling, and advanced power topologies, the PlatiGemini 1200W aims to deliver both reliability and performance on top of its multi-mode compatibility. The end result is a very interesting (if premium) product that can do it all.

  • SK hynix Wraps up Dev Work on High-End PCB01 PCIe 5.0 SSD for OEMs, Launching Later This Year
    on 29. June 2024 at 15:00

    SK hynix early on Friday announced that the company has finished the development of its PCB01 PCIe Gen5 SSD, the company's forthcoming high-end SSD for OEMs. Based on the company's new Alistar platform, the PCB01 is designed to deliver chart-topping performance for client machines. And, as a sign of the times, SK hynix is positioning the PCB01 for AI PCs, looking to synergize with the overall industry interest in anything and everything AI. The bare, OEM-focused drives have previously been shown off by SK hynix, and make no attempt to hide what's under the hood. The PCB01 relies on SK hynix's Alistar controller, which features a PCIe Gen5 x4 host interface on the front end and eight NAND channels on the back end, placing it solidly in the realm of high-end SSDs. Paired with the Alistar controller is the company's latest 238-layer TLC NAND (H25T1TD48C & H25T2TD88C), which offers a maximum transfer speed of 2400 MT/s. Being that this is a high-end client SSD, there's also a DRAM chip on board, though the company isn't disclosing its capacity. As with other high-end PCIe 5.0 client SSDs, SK hynix is planning on hitting peak read speeds of up to 14 GB/s on the drive, while peak sequential write speeds should top 12 GB/s (with pSLC caching, of course) – performance figures well within the realm of possibility for an 8-channel drive. As for random performance, at Computex the company was telling attendees that the drives should be able to sustain 4K random read and write rates of 2 million IOPS, which is very high as well. The SSDs are also said to consume up to 30% less power than 'predecessors,' according to SK hynix, though the company didn't elaborate on that figure. Typically in the storage industry, energy figures are based on iso-performance (rather than peak performance) – essentially measuring energy efficiency per bit rather than total power consumption – and that is likely the case here as well. At least initially, SK hynix plans to release its PCB01 in three capacities – 512 GB, 1 TB, and 2 TB. The company has previously disclosed that its 238L TLC NAND has a capacity of 512 Gbit, so these are typical capacity figures for single-sided drives. And while the focus of the company's press release this week was on OEM drives, this is the same controller and NAND that is also going into the company's previously-teased retail Platinum P51 SSD, so this week's reveal offers a bit more detail into what to expect from that drive family as well. Specs aside, Ahn Hyun, the Head of the N-S Committee at SK hynix, said that multiple global CPU providers for on-device AI PCs are seeking collaboration for the compatibility validation process, which is underway, so expect PCB01 drives inside PCs in this year's back-to-school and holiday seasons. "We will work towards enhancing our leadership as the global top AI memory provider also in the NAND solution space by successfully completing the customer validation and mass production of PCB01, which will be in the limelight," Ahn Hyun said.
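    A quick back-of-the-envelope check shows why the eight-channel back end is not the limiting factor for the quoted 14 GB/s reads: the NAND channels can collectively move more data than a PCIe Gen5 x4 link can carry. The ~15.8 GB/s Gen5 x4 figure below is the nominal link rate (32 GT/s per lane with 128b/130b encoding) and is my assumption for context, not something SK hynix quotes.

```python
# Raw back-end (NAND) vs front-end (PCIe) bandwidth for an 8-channel, 2400 MT/s design.
nand_channels = 8
nand_rate_mts = 2400                         # 8-bit NAND bus -> ~2400 MB/s per channel
nand_backend_gbs = nand_channels * nand_rate_mts / 1000

# PCIe Gen5 x4: 32 GT/s per lane, 128b/130b encoding (nominal figure, assumed here).
pcie_gen5_x4_gbs = 4 * 32 * (128 / 130) / 8

print(f"NAND back end: {nand_backend_gbs:.1f} GB/s")   # ~19.2 GB/s
print(f"PCIe Gen5 x4:  {pcie_gen5_x4_gbs:.1f} GB/s")   # ~15.8 GB/s
```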

  • The Lian Li Hydroshift LCD 360S AIO Cooler Review: Sleek, Stylish, and Lively
    on 28. June 2024 at 13:00

    Among the packed field of PC hardware manufacturers, Lian Li is a company that arguably shouldn't even need an introduction. The quirky company has developed a devout following thanks to its focus on premium-quality aluminum computer cases that, more often than not, come in some rather unique designs. Over the years, the company has developed a solid reputation for its meticulous craftsmanship, durability, and elegant aesthetics. And consequently, when the company made the decision to expand beyond aluminum cases and into other PC peripherals, that development attracted quite a bit of attention to see what kind of a touch Lian Li could bring to the rest of the PC ecosystem. Lian Li's focus on premium products means that the company doesn't really make much in the way of products that are merely basic, and that kind of mentality has extended beyond cases and into the rest of their peripherals. Case in point is the subject of today's review: Lian Li's new all-in-one CPU cooler, the HydroShift LCD 360S AIO. Not content just to make a powerful 360 mm cooler, Lian Li has gone a step above by integrating recesses and other features to help hide the tubing around the cooler, and then for the coup de grace, added a high-quality 2.88-inch IPS display to the pump block. This new product marks a significant milestone for Lian Li, as it combines advanced cooling technology with the company's signature aesthetic appeal, making for a cooler that's aimed at both enthusiasts and professional users seeking high-end thermal performance and visual customization. The HydroShift LCD 360S is undeniably designed first and foremost with aesthetics in mind, but the shiny pump block is backed up with one of the most powerful 360 mm cooler designs on the market today. So Lian Li is throwing everything they have at the new HydroShift coolers. The 360S is part of a trio of HydroShift 360 mm coolers the company is launching this summer. All three share a similar design, although with some pump changes and the addition of RGB fan lighting, depending on the specific model, with the 360S effectively serving as the base model.

  • Micron: U.S. Fabs Will Start Operating in 2026 - 2029
    on 27. June 2024 at 13:00

    When Micron announced plans to build two new fabs in the U.S. in 2022, the company vaguely said both would come online by the decade's end. Then, in 2023, it began to optimize its spending, which pushed back the start of production at these fabrication facilities. This week, the company outlined more precise timeframes for when its fabs in Idaho and New York will start operations: this will happen from calendar 2026 to calendar 2029. "These fab construction investments are necessary to support supply growth for the latter half of this decade," a statement by Micron in its Q3 FY2024 financial results report reads. "This Idaho fab will not contribute to meaningful bit supply until fiscal 2027 and the New York construction capex is not expected to contribute to bit supply growth until fiscal 2028 or later. The timing of future [wafer fab equipment] spend in these fabs will be managed to align supply growth with expected demand growth." Micron's fiscal year 2027 starts in September 2026, so the new fab near Boise, Idaho, is set to start operations between September 2026 and September 2027. The company's fiscal 2028 starts in September 2027, so the New York fab will likely begin operations in calendar 2028 or later, probably depending on the demand for DRAM memory in the coming years. All told, Micron's U.S. memory fabs will begin operations between late 2026 and 2029, which aligns with the company's original plans. Construction of the fab in Idaho is well underway. In contrast, construction of the New York facility has yet to begin as the company is working on regulatory and permitting processes in the state. Micron's capital expenditure (CapEx) plan for FY2024 is approximately $8.0 billion, with a decrease in year-over-year spending on wafer fabrication equipment (WFE). In Q4 FY2024, the company will spend around $3 billion on fab construction, new wafer fab tools, and various expansions/upgrades. Looking ahead to FY2025, the company plans a substantial increase in CapEx, targeting a mid-30s percentage of revenue to support various technological and facility advancements. In particular, it expects its quarterly CapEx to average above the $3 billion level seen in the fourth quarter of FY2024, which means that it plans to spend about $12 billion in its fiscal 2025, which begins in late September. Half or more of the total CapEx increase in FY2025 (i.e., over $2 billion) will be allocated to constructing new fabs in Idaho and New York. Meanwhile, the FY2025 CapEx will also rise significantly to fund high-bandwidth memory (HBM) assembly and testing and the construction of fabrication and back-end facilities. This increase also includes investments in technology transitions to meet growing demand. "Fab construction in Idaho is underway, and we are working diligently to complete the regulatory and permitting processes in New York," said Sanjay Mehrotra, chief executive officer of Micron, at the company's conference call with investors and financial analysts (via SeekingAlpha). "This additional leading-edge greenfield capacity, along with continued technology transition investments in our Asia facilities, is required to meet long-term demand in the second half of this decade and beyond. These investments support our objective to maintain our current bit share over time and to grow our memory bit supply in line with long-term industry bit demand."
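    Because the article switches between Micron's fiscal years and calendar years, here is a small sketch of the conversion used above (Micron's fiscal year N starts around September of calendar year N-1); the September boundary is an approximation, not an exact date.

```python
# Approximate mapping from Micron fiscal years to calendar windows,
# following the convention used above (FY N starts around September of year N-1).
def fiscal_year_window(fiscal_year: int) -> str:
    return f"FY{fiscal_year}: ~September {fiscal_year - 1} to ~August {fiscal_year}"

print(fiscal_year_window(2027))  # earliest window for Idaho bit supply
print(fiscal_year_window(2028))  # earliest window for New York bit supply
```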

  • Frore Unveils Waterproof AirJet Mini Sport for Smartphones
    on 26. June 2024 at 23:00

    Over the past couple of years, Frore Systems has demonstrated several ways that its AirJet solid-state active cooling systems can be used to improve cooling in fanless devices like laptops, tablets, SSDs, and edge computing devices. But there is a subset of those applications that need their cooling options to also be waterproof, and Frore is looking to address those as well. To that end, this week Frore introduced its AirJet Mini Sport, a waterproof, IP68-rated solid-state cooling device that is aimed at use in smartphones and action cameras. Introduced at MWC Shanghai to attract the attention of China-based vendors of handsets, edge and industrial computing devices, and action cameras, the AirJet Mini Sport is an enhanced version of Frore's AirJet Mini Slim. This version has been fully waterproofed, offering IP68-level protection that allows it to work while being submerged in over 1.5 meters of water for up to 30 minutes. Internally, the AirJet Mini Sport can effectively dissipate 5.25 Watts of heat by generating 1750 Pascals of back pressure, while consuming 1 Watt of energy itself. Elsewhere, Frore claims that the AirJet Mini Sport can be used to provide 2.5 Watts of cooling capacity to smartphones. Which, although not enough to cover the complete power consumption/heat dissipation of a high-end SoC, would have a significant impact on both burst and steady-state performance by allowing those chips to run at peak clocks and power for longer periods of time. To ensure consistent performance of Frore's AirJet Mini Sport in diverse environments, the cooling device includes features such as dust resistance and self-cleaning. In addition, just like the AirJet Mini Slim, the Sport-badged version has its own thermal sensor to control its own operation and maintain optimal performance. As a result, Frore claims that smartphones and action cameras with the AirJet Mini Sport can achieve up to 80% better performance. "We are excited to announce the waterproof AirJet Mini Sport," said Dr. Seshu Madhavapeddy, founder and CEO of Frore Systems. "Consumers demand increased performance in compact devices they can use anywhere, on land or in water. AirJet unleashes device performance, now enabling users to do more with their IP68 dustproof and waterproof devices."

  • NVIDIA's AD102 GPU Pops Up in MSI GeForce RTX 4070 Ti Super Cards
    on 26. June 2024 at 14:00

    As GPU families enter the later part of their lifecycles, we often see chip manufacturers start to offload stockpiles of salvaged chips that, for one reason or another, didn't make the grade for the tier of cards they are normally used in. These recovered chips are fairly unremarkable overall, but they are unsold silicon that still works and has economic value, leading to them being used in lower-tier cards so that they can be sold. And, judging by the appearance of a new video card design from MSI, it looks like NVIDIA's Ada Lovelace generation of chips has reached that stage, as the Taiwanese video card maker has put out a new GeForce RTX 4070 Ti Super card based on a salvaged AD102 GPU. Typically based on NVIDIA's AD103 GPU, NVIDIA's GeForce RTX 4070 Ti Super series sits a step below the company's flagship RTX 4080/4090 cards, both of which are based on the bigger and badder AD102 chip. But with some number of AD102 chips inevitably failing to live up to RTX 4080 specifications, rather than being thrown out, these chips can instead be used to make RTX 4070 cards. Which is exactly what MSI has done with their new GeForce RTX 4070 Ti Super Ventus 3X Black OC graphics card. The card itself is relatively unremarkable – using a binned AD102 chip doesn't come with any advantages, and it should perform just like regular AD103 cards – and for that reason, video card vendors rarely publicly note when they're doing a run of cards with a binned-down version of a bigger chip. However, these larger chips have a tell-tale PCB footprint that usually makes it obvious what's going on. Which, as first noticed by @wxnod, is exactly what's going on with MSI's card.

    Ada Lovelace Lineup: MSI GeForce RTX 4070 TiS (AD103), RTX 4070 TiS (AD102), & RTX 4090 (AD102)

    The tell, in this case, is the rear board shot provided by MSI. The larger AD102 GPU uses an equally larger mounting bracket, and is paired with a slightly more complex array of filtering capacitors on the back side of the board PCB. Ultimately, since these are visible in MSI's photos of their GeForce RTX 4070 Ti Super Ventus 3X Black OC, it's easy to compare it to other video cards and see that it has exactly the same capacitor layout as MSI's GeForce RTX 4090, thus confirming the use of an AD102 GPU. Chip curiosities aside, all NVIDIA GeForce RTX 4070 Ti Super graphics cards – no matter whether they are based on the AD102 or AD103 GPU – come with a GPU with 8,448 active CUDA cores and 16 GB of GDDR6X memory, so it doesn't (typically) matter which chip they carry. Otherwise, compared to a fully-enabled AD102 chip, the RTX 4070 Ti Super specifications are relatively modest, with fewer than half as many CUDA cores, underscoring how the AD102 chip being used in MSI's card is a pretty heavily salvaged part. As for the rest of the card, the MSI GeForce RTX 4070 Ti Super Ventus 3X Black OC is a relatively hefty card overall, with a cooling system to match. Being overclocked, the Ventus also has a slightly higher TDP than normal GeForce RTX 4070 Ti Super cards, weighing in at 295 Watts, or 10 Watts above baseline cards. Meanwhile, MSI is apparently not the only video card manufacturer using salvaged AD102 chips for the GeForce RTX 4070 Ti Super, either. @wxnod has also posted a screenshot obtained from an Inno3D GeForce RTX 4070 Ti Super based on an AD102 GPU. Sources: MSI, @wxnod
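    To put "fewer than half" into numbers: the RTX 4070 Ti Super enables 8,448 CUDA cores, while a fully-enabled AD102 has 18,432 CUDA cores per NVIDIA's published specifications (a figure quoted here for context rather than taken from the article above).

```python
# Fraction of a full AD102 die enabled in an RTX 4070 Ti Super configuration.
# The 18,432-core count for a fully-enabled AD102 is NVIDIA's published spec,
# cited here as outside context rather than from the article above.
enabled_cores = 8_448
full_ad102_cores = 18_432

print(f"Enabled fraction of AD102: {enabled_cores / full_ad102_cores:.1%}")  # ~45.8%
```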

  • Two Is Better Than One: LG Starts Production of 13-inch Tandem OLED Display for Laptops
    on 25. June 2024 at 13:00

    OLED panels have a number of advantages, including deep blacks, fast response times, and energy efficiency; most of these stem from the fact that they do not need backlighting. However, they also have drawbacks, as trying to drive them to be as bright as a high-tier LCD will quickly wear out the organic material used. Researchers have been spending the past couple of decades developing ways to prolong the lifespans of OLED materials, and recently LG has put together a novel (if brute force) solution: halve the work by doubling the number of pixels. This is the basis of the company's new tandem OLED technology, which has recently gone into mass production. The Tandem OLED technology introduced by LG Display uses two stacks of red, green, and blue (RGB) organic light-emitting layers, which are layered on top of each other, essentially reducing how bright each layer needs to individually be in order to hit a specific cumulative brightness. By combining multiple OLED pixels running at a lower brightness, tandem OLED displays are intended to offer higher brightness and durability than traditional single-panel OLED displays, reducing the wear on the organic materials in normal situations – and by extension, making it possible to crank up the brightness of the panels well beyond what a single panel could sustain without cooking itself. Overall, LG claims that tandem panels can hit over three times the brightness of standard OLED panels. The switch to tandem panels also comes with energy efficiency benefits, as the power consumption of OLED pixels is not linear with the output brightness. According to LG, their tandem panels consume up to 40% less power. More interesting from the manufacturing side of matters, LG's tandem panel stack is 40% thinner and 28% lighter than existing OLED laptop screens, despite having to get a whole second layer of pixels in there. In terms of specifications, the 13-inch tandem OLED panel features a WQXGA+ (2880×1800) resolution and can cover 100% of the DCI-P3 color gamut. The panel is also certified to meet VESA's DisplayHDR True Black 500 requirements, which, among other things, requires that it can hit 500 nits of brightness. And given that this tech is meant to go into tablets and laptops, it shouldn't come as any surprise that the display panel is also touch-sensitive. "We will continue to strengthen the competitiveness of OLED products for IT applications and offer differentiated customer value based on distinctive strengths of Tandem OLED, such as long life, high brightness, and low power consumption," said Jae-Won Jang, Vice President and Head of the Medium Display Product Planning Division at LG Display. Without any doubt, LG's Tandem OLED display panel looks impressive. The company is banking on it doing well in the high-end laptop and tablet markets, where manufacturers have been somewhat hesitant to embrace OLED displays due to power concerns. The technology has already been adopted by Apple for their most recent iPad Pro tablets, and now LG is making it available to a wider group of OEMs. What remains to be seen is the technology's cost. Computer-grade OLED panels are already a more expensive option, and this one ups the ante with two layers of OLED pixels. So it isn't a question of whether it will be reserved for premium, high-margin devices, but a matter of just how much it will add to the final price tag.
For now, LG Display does not disclose which PC OEMs are set to use its 13-inch Tandem OLED panel, though as the company is a supplier to virtually all of the PC OEMs, there's little doubt it should crop up in multiple laptops soon enough.

  • CUDIMM Standard Set to Make Desktop Memory a Bit Smarter and a Lot More Robust
    on 21. June 2024 at 14:30

    While the new CAMM and LPCAMM memory modules for laptops have garnered a great deal of attention in recent months, it's not just the mobile side of the PC memory industry that is looking at changes. The desktop memory market is also coming due for some upgrades to further improve DIMM performance, in the form of a new DIMM variety called the Clocked Unbuffered DIMM (CUDIMM). And while this memory isn't in use quite yet, several memory vendors had their initial CUDIMM products on display at this year's Computex trade show, offering a glimpse into the future of desktop memory. A variation on traditional Unbuffered DIMMs (UDIMMs), Clocked UDIMMs (and Clocked SODIMMs) have been created as another solution to the ongoing signal integrity challenges presented by DDR5 memory. DDR5 allows for rather speedy transfer rates with removable (and easily installed) DIMMs, but further performance increases are running up against the laws of physics when it comes to the electrical challenges of supporting memory on a stick – particularly with so many capacity/performance combinations as we see today. And while those challenges aren't insurmountable, if DDR5 (and eventually, DDR6) is to keep increasing in speed, some changes appear to be needed to produce more electrically robust DIMMs, which is giving rise to the CUDIMM. Standardized by JEDEC earlier this year as JESD323, CUDIMMs tweak the traditional unbuffered DIMM by adding a clock driver (CKD) to the DIMM itself, with the tiny IC responsible for regenerating the clock signal driving the actual memory chips. By generating a clean clock locally on the DIMM (rather than directly using the clock from the CPU, as is the case today), CUDIMMs are designed to offer improved stability and reliability at high memory speeds, combating the electrical issues that would otherwise cause reliability problems at faster memory speeds. In other words, adding a clock driver is the key to keeping DDR5 operating reliably at high clockspeeds. All told, JEDEC is proposing that CUDIMMs be used for DDR5-6400 speeds and higher, with the first version of the specification covering speeds up to DDR5-7200. The new DIMMs will also be drop-in compatible with existing platforms (at least on paper), using the same 288-pin connector as today's standard DDR5 UDIMMs and allowing for a relatively smooth transition towards higher DDR5 clockspeeds.
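    As a reminder of what the CKD actually regenerates: DDR5 transfers data on both edges of the I/O clock, so the data rates JEDEC targets for CUDIMMs correspond to the clock frequencies below. This is simple arithmetic for illustration, not text from the JESD323 specification.

```python
# DDR transfers data twice per I/O clock cycle, so I/O clock = data rate / 2.
for data_rate_mts in (6400, 7200):
    io_clock_mhz = data_rate_mts // 2
    print(f"DDR5-{data_rate_mts}: {io_clock_mhz} MHz I/O clock for the CKD to regenerate")
# DDR5-6400 -> 3200 MHz, DDR5-7200 -> 3600 MHz
```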
