Information technology
News
- TinyTendo Fits NES Hardware Inside Game Boy-Sized Shell on 1 June 2023 at 13:28
Redherring32 has crammed NES hardware inside of a Game Boy-sized shell for old school gaming on the go.
- TeamGroup Goes Big on SSD Cooling, Demos 120mm AIO Liquid Cooler For M.2 Drives on 1 June 2023 at 13:00
TeamGroup is demonstrating at Computex 2023 what it claims to be the world's first all-in-one liquid cooling system for hot-running M.2 SSDs. The SSD-sized Siren cooler is meant to ensure that high-end drives offer consistently high performance for prolonged periods, given the propensity for first-generation PCIe 5.0 SSDs to heat up and throttle under sustained heavy write workloads. In a sign of the times in the high-end SSD space, TeamGroup has developed a high-end liquid cooler just for M.2 SSDs: the T-Force Siren GD120S, an all-in-one closed-loop liquid cooler with a fairly large M.2-compatible water block and a 120mm radiator. This cooling system will be the company's range-topping cooler for solid-state drives, guaranteeing that they hit their maximum performance by giving them nothing less than an overkill amount of cooling.

Image Courtesy TeamGroup

For reference, the M.2 spec tops out at a sustained power draw of 14.85W (3.3V @ 4.5A), with momentary excursions as high as 25W. So even with a high-end SSD like a current-generation E26-based drive, the actual cooling needs are limited. However, in keeping with true PC style, sometimes you just want to go big – and in those cases there's the Siren.

The GD120S's water block features a copper cold plate and measures 78 x 58 x 23.6mm. It's designed to be mated with M.2 2280 drives; there's no word on whether it will work with anything smaller. The pump is rated for 22 dB(A) of noise. Meanwhile, the radiator is a typical aluminum radiator, and is 136mm thick. That's paired with a 120mm fan with ARGB lighting; it runs at a maximum speed of 2200 RPM, which translates to a maximum noise level of 39.5 dB(A). The cooler as a whole has a rated power consumption of 4 Watts.

Image Courtesy TeamGroup

TeamGroup has been particularly vocal about using liquid cooling for solid-state drives. The company's first liquid-cooled drive, the T-Force Cardea Liquid, relied on a concept that largely resembled a vapor chamber. The company then introduced the T-Force Cardea Liquid II with an all-in-one LCS, but that device never made it to market and eventually transformed into a dual CPU and SSD cooler. Now, the company is finally ready to go with a dedicated AIO liquid cooler for M.2 SSDs. Meanwhile, on the slightly more pragmatic side of matters, TeamGroup will also be offering its T-Force Dark AirFlow coolers, a tamer heatsink-and-active-fan setup. The company has three different models on display, each employing a different heatsink configuration.
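As a quick sanity check on those figures (our own napkin math, not TeamGroup's), here is a minimal Python sketch reproducing the M.2 spec numbers quoted above:

```python
# Back-of-the-envelope check of the M.2 power figures cited above.
# Sustained draw per the M.2 spec as quoted: 3.3 V at 4.5 A.
sustained_voltage_v = 3.3
sustained_current_a = 4.5

sustained_power_w = sustained_voltage_v * sustained_current_a
print(f"Sustained M.2 power envelope: {sustained_power_w:.2f} W")  # 14.85 W

# Even the momentary 25 W excursions are small next to a 120 mm AIO,
# which comfortably handles CPUs dissipating well over 100 W.
print(f"Peak excursion: 25 W ({25 / sustained_power_w:.1f}x sustained)")
```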
- Streacom ZS800 Rethinks PSUs for SFF Builds on 1 June 2023 at 12:57
Streacom's new ZS800 Hybrid SFX PSU takes a fresh design spin on modularity, cabling, and replaceable cooling fans.
- Diablo IV PC Performance: We're Testing a Bunch of GPUs on 1 June 2023 at 11:00
We're testing Diablo IV on a bunch of graphics cards to see how it runs. We'll be routinely updating the charts once the game goes live.
- This AI writing assistant helps you write a killer resume for $39.99 by TechRepublic Academy on 1 June 2023 at 9:26
Get ahead of the competition next time you apply for a job. The post This AI writing assistant helps you write a killer resume for $39.99 appeared first on TechRepublic.
- TeamGroup Unveils 120mm AIO Liquid Cooler For PCIe 5.0 SSDs on 1 June 2023 at 9:10
TeamGroup announces new PCIe 5.0 SSDs, coolers, and DDR5 memory kits at Computex 2023.
- TechRepublic Premium editorial calendar: IT policies, checklists, toolkits and research for download by TechRepublic Staff on 1 June 2023 at 8:30
TechRepublic Premium content helps you solve your toughest IT issues and jump-start your career or next project. The post TechRepublic Premium editorial calendar: IT policies, checklists, toolkits and research for download appeared first on TechRepublic.
- MSI Shows Meteor Lake-Powered Laptop At Computex 2023 on 1 June 2023 at 4:07
MSI reveals the Prestige 16 Studio/Evo laptop powered by Intel's latest 14th Generation Meteor Lake processor at Computex 2023.
- G.Skill Pyramid PC Has Core i9-13900K Running at 7 GHz, RAM at 10,000 MT/s on 1 June 2023 at 3:44
G.Skill shows off new products at Computex 2023 and some exciting overclocking feats.
- Adata's 1600W PSU Powers Four GeForce RTX 4090 Graphics Cards on 1 June 2023 at 2:07
Adata's Fusion 1600W power supply promises extreme reliability and controls.
- Adata Demos Next-Gen Memory: CAMM, CXL, and MR-DIMM Modules on 1 June 2023 at 0:17
Adata is ready for next-generation platforms with CAMM, CXL, and MR-DIMM memory modules.
- Firmware Backdoor Discovered in Gigabyte Motherboards, 250+ Models Affected on 31 May 2023 at 23:46
Cybersecurity company Eclypsium has discovered a backdoor in Gigabyte's firmware that affects 271 different motherboards.
- Bamboo CI/CD tool review by Enrique Corrales on 31 May 2023 at 23:00
A review of the continuous integration and delivery tool, Bamboo CI/CD. Learn about its features, benefits, and pricing. The post Bamboo CI/CD tool review appeared first on TechRepublic.
- 8 best practices for securing your Mac from hackers in 2023 by Cory Bohon on 31 May 2023 at 21:28
Best practices for securing your Mac against potential hacks and security vulnerabilities include enabling the firewall, using strong passwords and encryption, and enabling Lockdown Mode. The post 8 best practices for securing your Mac from hackers in 2023 appeared first on TechRepublic.
- Microsoft PowerToys 0.70.0: A breakdown of Mouse Without Borders and Peek apps by Mark W. Kaelin on 31 May 2023 at 20:47
PowerToys has two new apps: Mouse Without Borders allows you to control other PCs from a single keyboard, and Peek allows you to preview files in Explorer. The post Microsoft PowerToys 0.70.0: A breakdown of Mouse Without Borders and Peek apps appeared first on TechRepublic.
- NVIDIA announces new class of supercomputer and other AI-focused data center services by Megan Crouse on 31 May 2023 at 19:51
The NVIDIA DGX supercomputer using GH200 Grace Hopper Superchips could be the top of its class. Learn what this and the company’s other announcements mean for enterprise AI and high-performance computing. The post NVIDIA announces new class of supercomputer and other AI-focused data center services appeared first on TechRepublic.
- Adata Details SSD with Self-Contained Liquid Cooling System: Up to 14GB/s on 31 May 2023 at 19:40
Adata's Project NeonStorm SSD will use Silicon Motion's SM2508 platform.
- Multiplier Review (2023): Features, Pricing, Alternatives & More by Ray Fernandez on 31 May 2023 at 18:44
Looking for a global employment platform? Read our complete product review to learn more about Multiplier's key features, pricing, pros, cons, and alternatives. The post Multiplier Review (2023): Features, Pricing, Alternatives & More appeared first on TechRepublic.
- TSMC Shares More Info on 2nm: New MIM Capacitor and Backside PDN Detailed on 31 May 2023 at 17:30
TSMC has revealed some additional details about its upcoming N2 and N2P process technologies at its European Technology Symposium 2023. Both production nodes are being developed with high-performance computing (HPC) in mind, so they feature a number of enhancements designed specifically to improve performance. Meanwhile, given the performance-efficiency focus that most chips aim to improve upon, low-power applications will also take advantage of TSMC's N2 nodes, as they will naturally improve performance-per-watt compared to their predecessors.

"N2 is a great fit for the energy efficient computing paradigm that we are in today," said Yujun Li, TSMC's director of business development in charge of the foundry's High Performance Computing Business Division, at the company's European Technology Symposium 2023. "The speed and power advantages of N2 over N3 over the entire voltage supply ranges as shown is very consistent, making it suitable for both low-power and high-performance applications at the same time."

TSMC's N2 manufacturing node — the foundry's first production node to use nanosheet gate-all-around (GAAFET) transistors — promises to increase transistor performance by 10-15% at the same power and complexity, or to lower power usage by 25-30% at the same clock speed and transistor count. Power delivery is one of the cornerstones of improving transistor performance, and TSMC's N2 and N2P manufacturing processes introduce several interconnect-related innovations to squeeze out some additional performance. Furthermore, N2P brings a backside power rail to optimize power delivery and die area.

Fighting Resistance

One of the innovations that N2 brings to the table is a super-high-performance metal-insulator-metal (SHPMIM) capacitor to enhance power supply stability and facilitate on-chip decoupling. TSMC says that the new SHPMIM capacitor offers over 2X higher capacitance density compared to its super-high-density metal-insulator-metal (SHDMIM) capacitor, introduced several years ago for HPC (which in turn increased capacitance by 4X compared to the previous-generation HDMIM). The new SHPMIM also reduces Rs sheet resistance (Ohm/square) by 50% compared to SHDMIM, as well as Rc via resistance by 50% compared to SHDMIM.

Yet another way to reduce resistance in the power delivery network has been to rearchitect the redistribution layer (RDL). Starting from its N2 process technology, TSMC will use a copper RDL instead of today's aluminum RDL. A copper RDL provides a similar RDL pitch, but reduces sheet resistance by 30% and cuts via resistance by 60%. Both SHPMIM and Cu RDL are parts of TSMC's N2 technology, which is projected to be used for high volume manufacturing (HVM) in the second half of 2025 (presumably very late in 2025).

Decoupling Power and I/O Wiring

The use of a backside power delivery network (PDN) is yet another major improvement, one that will be featured by N2P. The general advantages of a backside power rail are well known: by separating I/O and power wiring and moving the power rails to the back, it is possible to make power wires thicker and therefore reduce via resistance in the back-end-of-line (BEOL), which promises to improve performance and cut power consumption. Decoupling I/O and power wires also allows the logic area to shrink, which means lower costs. At its Technology Symposium 2023, the company revealed that the backside PDN of its N2P will enable 10% to 12% higher performance by reducing IR droop and improving signaling, as well as reducing the logic area by 10% to 15%.

Such advantages will, of course, be most obvious in high-performance CPUs and GPUs, which have dense power delivery networks, so moving those networks to the back makes a great deal of sense for them. Backside PDN is part of TSMC's N2P fabrication technology, which will enter HVM in late 2026 or early 2027.
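To make the resistance and capacitance claims above concrete, here is a small Python sketch of the arithmetic; note it assumes the generational capacitor-density gains compound multiplicatively, which TSMC has not stated explicitly:

```python
# Compounding the MIM capacitor density claims (assumed multiplicative;
# TSMC quotes each gain only against the prior generation).
hdmim = 1.0               # normalized HDMIM baseline density
shdmim = 4.0 * hdmim      # SHDMIM: 4x vs. previous-generation HDMIM
shpmim = 2.0 * shdmim     # SHPMIM: >2x vs. SHDMIM
print(f"SHPMIM vs. HDMIM: >{shpmim:.0f}x capacitance density")  # >8x

# Copper RDL resistance relative to today's aluminum RDL:
print(f"Cu RDL sheet resistance: {1 - 0.30:.2f}x of Al RDL")  # 30% lower
print(f"Cu RDL via resistance:   {1 - 0.60:.2f}x of Al RDL")  # 60% lower
```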
- How to make your iPhone or iPad faster and smoother with iOS 16 by Cory Bohon on 31 May 2023 at 14:29
iOS devices are fairly easy to maintain and keep running smoothly, but with a few tips, you can ensure the longevity of your devices and keep them running just as fast as the day you bought them. The post How to make your iPhone or iPad faster and smoother with iOS 16 appeared first on TechRepublic.
- Threatening botnets can be created with little code experience, Akamai finds by Karl Greenberg on 31 May 2023 at 14:26
Researchers at Akamai’s Security Intelligence unit find a botnet specimen that reveals how successful DDoS, spam and other cyberattacks can be done with little finesse, knowledge or savvy. The post Threatening botnets can be created with little code experience, Akamai finds appeared first on TechRepublic.
- Last chance: Get lifetime access to Microsoft Office 2021 for just $30 by TechRepublic Academy on 31 May 2023 at 14:00
Whether you're starting a new business venture and need Microsoft Office's help or you just want to get better organized in your personal life, it's a good time to take advantage of this limited-time deal. The post Last chance: Get lifetime access to Microsoft Office 2021 for just $30 appeared first on TechRepublic.
- MSI Intros USB4 PCIe Expansion Card with 100W Power Delivery on 31 May 2023 at 11:00
For Computex 2023, MSI is introducing an interesting USB4 PCIe expansion card. The card not only offers two full-bandwidth USB4 40Gbps Type-C ports, but it can also deliver up to 100W of power to a connected device, allowing it to be used to power high-drain devices like laptops.

The MSI USB4 PD100W Expansion Card (MS-4489) has two DisplayPort inputs as well as two USB Type-C connectors. The Type-C ports support USB data rates up to 40 Gbps, but also support DP alt mode and USB power delivery. What really makes this card notable are those power delivery capabilities; most USB4/Thunderbolt 4 expansion cards are PCIe bus-powered, and can only deliver up to 15 Watts or so. MSI's card, on the other hand, can deliver up to 100 Watts of power on its best Type-C port, which is enough for charging a high-performance notebook or powering something demanding (e.g., a display). Meanwhile, the card's second Type-C port can deliver up to 27 Watts, which is enough for smartphones and other mid-power peripherals.

The card uses a physical PCIe x8 form factor, with what looks to be an electrical x4 interface. MSI has disclosed that it's using a PCIe 4.0 connection, though for the moment the company hasn't disclosed whose USB4 controller it's using. PCIe 4.0 x4 is sufficient to fully drive a 40Gbps port and then some, but it'll fall a bit short of simultaneously driving both ports at their maximum data transfer rates (assuming you even have a workload that can fully saturate the links). Meanwhile, as this USB4 host card goes above and beyond the amount of power a PCIe slot can provide, the card also has a six-pin auxiliary PCIe connector to supply the remaining power. Per the PCIe specification, a x4 card can draw up to 25W from the slot, so the 75W auxiliary connector brings the card to its 100W limit. Though this also means that if MSI is sticking to the PCIe spec, it can't deliver a full 100W + 27W at the same time.

MSI's USB4 PD100W Expansion Card is mainly aimed at users who need to attach bandwidth-demanding peripherals (such as direct-attached storage or some professional equipment) and USB-C displays to their desktop PCs. The board will serve equally well in the latest PCs that lack USB4 connectors (or need extra Type-C ports) and in machines that are already in use and need to gain advanced connectivity. MSI has not disclosed pricing for its USB4 expansion card or when it is set to be available, though we would expect it to be priced competitively against the similar Thunderbolt 3/4 expansion cards that have been available for some time.
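For the curious, the bandwidth and power budgets described above work out as follows; this is a rough sketch using nominal PCIe 4.0 and USB4 spec figures, not numbers MSI has provided:

```python
# Napkin math for the host-link bandwidth and power budget claims above.
# Figures are nominal PCIe 4.0 / USB4 spec values, not MSI disclosures.
pcie4_lane_gbps = 16 * (128 / 130)    # 16 GT/s lane, 128b/130b encoding
host_link_gbps = pcie4_lane_gbps * 4  # electrical x4 interface

print(f"PCIe 4.0 x4 host link: {host_link_gbps:.1f} Gbps")  # ~63 Gbps
print("One USB4 port: 40 Gbps -> headroom to spare")
print("Both USB4 ports: 80 Gbps -> exceeds the host link")

# Power: PCIe allows 25 W from a x4 slot; the 6-pin connector adds 75 W.
slot_w, aux_w = 25, 75
print(f"Budget: {slot_w + aux_w} W vs. 100 W + 27 W = 127 W peak demand")
```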
- Asus Details ROG Matrix GeForce RTX 4090: Liquid Cooling Meets Liquid Metal on 30 May 2023 at 23:00
Asus has introduced a new flagship RTX 4090 graphics card that uses an all-in-one liquid cooling system combined with a liquid metal thermal interface. Dubbed the ROG Matrix GeForce RTX 4090, Asus says that its advanced cooler combined with an extremely efficient thermal interface will ensure the maximum boost clocks possible, with Asus taking clear aim at producing the fastest gaming graphics card on the market.

Proper power delivery and efficient cooling are the main ways to enable consistently high CPU and GPU performance these days, so when designing its ROG Matrix GeForce RTX 4090, the company used its own proprietary printed circuit board (PCB) with an advanced voltage regulating module (VRM). Meanwhile, cooling is provided by an all-in-one liquid cooling system that removes heat not only from the GPU, but also from the memory and VRM, exhausting that heat via the attached "extra-thick" 360mm radiator. But Asus says that its ROG Matrix GeForce RTX 4090 has a secret ingredient that its rivals lack: a liquid metal thermal interface material (TIM) that ensures superior heat transfer from hot components to the cooling system.

Asus does not disclose what type of liquid metal TIM it uses for graphics cards (it uses Thermal Grizzly's Conductonaut Extreme for some laptops), but usually such thermal interfaces are made from gallium or gallium alloys, which are liquid at or near room temperature and are great conductors of heat. There are also some risks and challenges associated with using liquid metal thermal interfaces, however. Firstly, they are electrically conductive, which means that if the material spills or is not properly contained, it could cause a short circuit. Secondly, these materials can be corrosive to certain metals like aluminum. Thirdly, applying liquid metal can be more complicated than using other types of thermal paste, requiring careful handling and precision. Asus says that it has been using liquid metal TIMs in its laptops for years, so using them for graphics cards does not seem to be a big challenge for the company.

Image Credit: Future/TechRadar

Asus is not disclosing the complete specifications of the ROG Matrix GeForce RTX 4090 for the moment, but it certainly hopes to make the graphics card the world's fastest. It remains to be seen whether the product will indeed be the fastest out of the box, but it will certainly offer noteworthy overclocking potential when compared to regular GeForce RTX 4090 graphics boards with conventional coolers. The Asus ROG Matrix GeForce RTX 4090 will be a limited-edition card available for sale in Q3.
- Corsair Unveils Dominator Titanium DDR5 Kits: Reaching For DDR5-8000 on 30 May 2023 at 22:00
Corsair has introduced its new Dominator Titanium series of DDR5 memory modules, which will combine performance, capacity, and style. The new lineup will offer DRAM kits of up to 192 GB in capacity at data transfer rates as high as DDR5-8000. The Dominator Titanium DIMMs are based on cherry-picked memory chips and Corsair's own printed circuit boards to ensure signal quality and integrity. These PCBs are also supplemented with internal cooling planes and external thermal pads that transfer heat to aluminum heat spreaders, with an aim of keeping the heavily overclocked DRAM sufficiently cooled.

With regards to performance, the retail versions of the Titanium kits will run at speeds ranging from DDR5-6000 to DDR5-8000 – which, at the moment, would make the top-end SKUs some of the highest-clocked DDR5 RAM on the market. Corsair is also promising kits with CAS latencies as low as CL30, though absent a full product matrix, it's likely those kits will be clocked lower. The DIMMs come equipped with AMD EXPO (AMD version) or Intel XMP 3.0 (Intel version) SPD profiles for easier overclocking.

As for capacity, the Titanium DIMMs will be available in 16GB, 24GB, 32GB, and 48GB configurations, allowing for kits ranging from 32GB (2 x 16GB) up to 192GB (4 x 48GB). Following the usual trend for DDR5 memory kits, we'll wager that DDR5-8000 kits won't be available in 192GB capacities – even Intel's DDR5 memory controller has a very hard time running 4 DIMMs anywhere near that fast – so we're expecting the fastest kits to be limited to smaller capacities, likely 48GB (2 x 24GB).

Corsair is not disclosing whose memory chips it uses for its Dominator Titanium memory modules, but there is a good chance that it uses Micron's latest generation of DDR5 chips, which are available in both 16Gbit and 24Gbit capacities. Micron was the first DRAM vendor to publicly start shipping 24Gbit DRAM chips, so they are the most likely candidate for the first 24GB/48GB DIMMs such as Corsair's. And if that's the case, it would mark an interesting turnaround for Micron; the company's first-generation DDR5 modules are not known for overclocking very well, which is why we haven't been seeing them on current high-end DDR5 kits.

Image Credit: Future/TechRadar

Corsair has also taken aesthetic preferences into account by incorporating 11 addressable Capellix RGB LEDs into the modules. Users can customize and control these LEDs using Corsair's iCue software. For those favoring minimalism, Corsair offers separate Fin Accessory Kits, which replace the RGB top bars with fins, bringing a classic look reminiscent of the original Dominator memory.

While Corsair's new Dominator Titanium memory modules are already very fast, to commemorate their debut Corsair plans to release a limited run of First-Edition kits. These exclusive kits will feature even higher clocks and tighter timings – likely running at DDR5-8266 speeds, which Corsair is showing off at Computex. Corsair intends to offer only 500 individually numbered First-Edition kits.

Corsair plans to start selling its Dominator Titanium kits in July. Pricing will depend on market conditions, but expect these DIMMs to carry premium price tags.
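As a rough guide to what those data rates mean in practice, here is a quick sketch of peak per-DIMM bandwidth, assuming the standard 64-bit data path (two 32-bit subchannels) of a DDR5 DIMM:

```python
# Peak theoretical bandwidth per DDR5 DIMM at the speeds Corsair lists.
# Assumes the standard 64-bit data path (two 32-bit subchannels).
def dimm_bandwidth_gbs(mt_per_s: int, bus_bits: int = 64) -> float:
    return mt_per_s * (bus_bits // 8) / 1000  # MT/s x bytes -> GB/s

for speed in (6000, 8000, 8266):
    print(f"DDR5-{speed}: {dimm_bandwidth_gbs(speed):.1f} GB/s per DIMM")
# DDR5-6000: 48.0 GB/s; DDR5-8000: 64.0 GB/s; DDR5-8266: ~66.1 GB/s
```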
- SK Hynix Publishes First Info on HBM3E Memory: Ultra-wide HPC Memory to Reach 8 GT/s on 30 May 2023 at 11:00
SK Hynix was one of the key developers of the original HBM memory back in 2014, and the company certainly hopes to stay ahead of the industry with this premium type of DRAM. On Tuesday, buried in a note about qualifying the company's 1bnm fab process, the manufacturer remarked for the first time that it is working on next-generation HBM3E memory, which will enable speeds of up to 8 Gbps/pin and will be available in 2024. Contemporary HBM3 memory from SK Hynix and other vendors supports data transfer rates up to 6.4 Gbps/pin, so HBM3E with an 8 Gbps/pin transfer rate will provide a moderate, 25% bandwidth advantage over existing memory devices. To put this in context, with a single HBM stack using a 1024-bit wide memory bus, this would give a known good stack die (KGSD) of HBM3E around 1 TB/sec of bandwidth, up from 819.2 GB/sec for HBM3 today. With modern HPC-class processors employing half a dozen stacks (or more), that would work out to several TB/sec of bandwidth for those high-end processors.

According to the company's note, SK Hynix intends to start sampling its HBM3E memory in the coming months, and to initiate volume production in 2024. The memory maker did not reveal much in the way of details about HBM3E (in fact, this is the first public mention of its specifications at all), so we do not know whether these devices will be drop-in compatible with existing HBM3 controllers and physical interfaces.

HBM Memory Comparison

| | HBM3E | HBM3 | HBM2E | HBM2 |
|---|---|---|---|---|
| Max Capacity | ? | 24 GB | 16 GB | 8 GB |
| Max Bandwidth Per Pin | 8 Gb/s | 6.4 Gb/s | 3.6 Gb/s | 2.0 Gb/s |
| Number of DRAM ICs per Stack | ? | 12 | 8 | 8 |
| Effective Bus Width | 1024-bit | 1024-bit | 1024-bit | 1024-bit |
| Voltage | ? | 1.1 V | 1.2 V | 1.2 V |
| Bandwidth per Stack | 1 TB/s | 819.2 GB/s | 460.8 GB/s | 256 GB/s |

Assuming SK Hynix's HBM3E development goes according to plan, the company should have little trouble lining up customers for even faster memory. Especially with demand for GPUs going through the roof for use in building AI training and inference systems, NVIDIA and other processor vendors are more than willing to pay a premium for the advanced memory they need to produce ever faster processors during this boom period in the industry.

SK Hynix will be producing HBM3E memory using its 1b nanometer fabrication technology (5th Generation 10nm-class node), which is currently being used to make DDR5-6400 memory chips that are set to be validated for Intel’s next-generation Xeon Scalable platform. In addition, the manufacturing technology will be used to make LPDDR5T memory chips that will combine high performance with low power consumption.
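The per-stack bandwidth figures in the table fall straight out of the per-pin rate and the 1024-bit bus; a quick Python check:

```python
# Per-stack HBM bandwidth = per-pin rate (Gb/s) x bus width (bits) / 8.
def stack_bandwidth_gbs(pin_rate_gbps: float, bus_bits: int = 1024) -> float:
    return pin_rate_gbps * bus_bits / 8

for name, rate in (("HBM2", 2.0), ("HBM2E", 3.6), ("HBM3", 6.4), ("HBM3E", 8.0)):
    print(f"{name}: {stack_bandwidth_gbs(rate):7.1f} GB/s per stack")
# HBM3E lands at 1024 GB/s, the ~1 TB/s cited above; half a dozen stacks
# would put an HPC-class processor at roughly 6 TB/s.
```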
- Phison Unveils PS5031-E31T SSD Platform For Lower-Power Mainstream PCIe 5 SSDs on 29 May 2023 at 22:00
At Computex 2023, Phison is introducing a new, lower-cost SSD controller for building mainstream PCIe 5.0 SSDs. The Phison PS5031-E31T is a quad-channel, DRAM-less controller for solid-state drives that is designed to offer sequential read/write speeds up to 10.8 GB/s at drive capacities of up to 8 TB, which is in line with some of the fastest PCIe 5.0 SSDs available today.

The Phison E31T controller is, at a high level, the lower-cost counterpart to Phison's current high-end PCIe 5.0 SSD controller, the E26. The E31T is based around multiple Arm Cortex-R5 cores for realtime operations, and in Phison designs these are traditionally accompanied by special-purpose accelerators that belong to the company's CoXProcessor package. The chip supports Phison's 7th-generation LDPC engine with RAID ECC and a 4K code word to handle the latest and upcoming 3D TLC and 3D QLC types of 3D NAND. The controller also supports AES-256, TCG Opal, and TCG Pyrite security.

The SSD controller is organized into four NAND channels with 16 chip enable lines (CEs) each, allowing it to address 16 NAND dies per channel. For now, Phison is refraining from disclosing the NAND interface speeds the controller supports, though given that the controller is set to support sequential read/write throughput of 10,800 MB/s over four channels, napkin math indicates they'll need to support transfer rates of at least 2700 MT/s. This is on the upper end of current ONFi/Toggle standards, but still readily attained. For example, Kioxia's and Western Digital's latest 218-layer BiCS 3D NAND devices support a 3200 MT/s interface speed (which provides a peak sequential read/write speed of 400 MB/s per pin).

Phison says that its E31T controller will enable M.2-2280 SSDs with a PCIe 5.0 x4 interface and capacities of up to 8 TB. Phison's DRAM-less controllers tend to remain in use in SSD designs for quite a while due to their mainstream positioning and relatively cheap price, so, unsurprisingly, Phison traditionally opts to plan for the long term with regards to capacity. 8 TB SSDs will eventually come down in price, even if they aren't here quite yet.

Phison NVMe SSD Controller Comparison

| | E31T | E27T | E21T | E26 | E18 |
|---|---|---|---|---|---|
| Market Segment | Mainstream Consumer | Mainstream Consumer | Mainstream Consumer | High-End Consumer | High-End Consumer |
| Manufacturing Process | 7nm | 12nm | 12nm | 12nm | 12nm |
| CPU Cores | 1x Cortex R5 | 1x Cortex R5 | 1x Cortex R5 | 2x Cortex R5 | 3x Cortex R5 |
| Error Correction | 7th Gen LDPC | 5th Gen LDPC | 4th Gen LDPC | 5th Gen LDPC | 4th Gen LDPC |
| DRAM | No | No | No | DDR4, LPDDR4 | DDR4 |
| Host Interface | PCIe 5.0 x4 | PCIe 4.0 x4 | PCIe 4.0 x4 | PCIe 5.0 x4 | PCIe 4.0 x4 |
| NVMe Version | NVMe 2.0? | NVMe 2.0? | NVMe 1.4 | NVMe 2.0 | NVMe 1.4 |
| NAND Channels, Interface Speed | 4 ch, 3200 MT/s? | 4 ch, 2400 MT/s? | 4 ch, 1600 MT/s | 8 ch, 2400 MT/s | 8 ch, 1600 MT/s |
| Max Capacity | 8 TB | 8 TB | 4 TB | 8 TB | 8 TB |
| Sequential Read | 10.8 GB/s | 7.4 GB/s | 5.0 GB/s | 14 GB/s | 7.4 GB/s |
| Sequential Write | 10.8 GB/s | 6.7 GB/s | 4.5 GB/s | 11.8 GB/s | 7.0 GB/s |
| 4KB Random Read IOPS | 1500k | 1200k | 780k | 1500k | 1000k |
| 4KB Random Write IOPS | 1500k | 1200k | 800k | 2000k | 1000k |

Compared to the high-end E26 controller, the E31T supports fewer NAND channels and NAND dies overall, but enthusiasts will also want to take note of the manufacturing process Phison is using for the controller. Phison is scheduled to build the E31T on TSMC's 7nm process, which, although no longer cutting-edge, is a full generation ahead of the 12nm process used for the E26. Combined with the reduced complexity of the controller, this should bode well for cooler-running and less power-hungry PCIe 5.0 SSDs.

The smaller, mainstream-focused chip should also allow those PCIe 5.0 SSDs to be cheaper. Though, as always, it should be noted that Phison doesn't publicly talk about controller pricing, let alone control what their customers (SSD vendors) charge for their finished drives. As for the availability of drives based on Phison's new controller: as Phison has not yet announced an expected sampling date, you shouldn't expect to see E31T drives for a while. Phison typically announces new controllers fairly early in the SSD development process, so there's usually at least a several-month gap before finished SSDs hit the market. Case in point: drives based on Phison's PCIe 4.0 E27T controller, which was announced at Computex 2022, are still not available. Otherwise, as Phison's second PCIe 5.0 controller, the E31T should hopefully encounter fewer teething issues than the initial E26, but we'd still expect E31T drives to be 2024 products.
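That napkin math is simple enough to show directly; the sketch below assumes the usual 8-bit-wide ONFi/Toggle NAND bus, where 1 MT/s equates to 1 MB/s per channel:

```python
# Required per-channel NAND interface speed for 10.8 GB/s over 4 channels.
# Assumes the usual 8-bit ONFi/Toggle bus: 1 MT/s ~= 1 MB/s per channel.
target_mbs = 10_800
channels = 4

per_channel_mts = target_mbs / channels
print(f"Required: {per_channel_mts:.0f} MT/s per channel")  # 2700 MT/s
print(f"3200 MT/s NAND leaves {3200 / per_channel_mts - 1:.0%} headroom")
```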
- Intel Discloses New Details On Meteor Lake VPU Block, Lays Out Vision For Client AI on 29 May 2023 at 13:00
While the first systems based on Intel’s forthcoming Meteor Lake (14th Gen Core) platform are still at least a few months out – and thus just a bit too far out to show off at Computex – Intel is already laying the groundwork for Meteor Lake’s forthcoming launch. For this year’s show, in what’s very quickly become an AI-centric event, Intel is using Computex to lay out their vision of client-side AI inference for the next generation of systems. This includes both some new disclosures about the AI processing hardware that will be in Intel’s Meteor Lake hardware, as well as what Intel expects OSes and software developers are going to do with the new capabilities.

AI, of course, has quickly become the operative buzzword of the technology industry over the last several months, especially following the public introduction of ChatGPT and the explosion of interest in what’s now being termed “generative AI”. So, as in the early adoption stages of other major new compute technologies, hardware and software vendors alike are still in the process of figuring out what can be done with this new technology, and what the best hardware designs to power it are. And behind all of that… let’s just say there’s a lot of potential revenue waiting in the wings for those companies that succeed in this new AI race.

Intel for its part is no stranger to AI hardware, though it’s certainly not a field that normally receives top billing at a company best known for its CPUs and fabs (and in that order). Intel’s stable of wholly-owned subsidiaries in this space includes Movidius, who makes low-power vision processing units (VPUs), and Habana Labs, responsible for the Gaudi family of high-end deep learning accelerators. But even within Intel’s rank-and-file client products, the company has been including some very basic, ultra-low-power AI-adjacent hardware in the form of their Gaussian & Neural Accelerator (GNA) block for audio processing, which has been in the Core family since the Ice Lake architecture. Still, in 2023 the winds are clearly blowing in the direction of adding even more AI hardware at every level, from the client to the server. So for Computex, Intel is disclosing a bit more on their AI efforts for Meteor Lake.
- NVIDIA: Grace Hopper Has Entered Full Production & Announcing DGX GH200 AI Supercomputer on 29 May 2023 at 11:00
Teeing off an AI-heavy slate of announcements for NVIDIA, the company has confirmed that their Grace Hopper “superchip” has entered full production. The combination of a Grace CPU and Hopper H100 GPU, Grace Hopper is designed to be NVIDIA’s answer for customers who need a more tightly integrated CPU + GPU solution for their workloads – particularly for AI models.

In the works for a few years now, Grace Hopper is NVIDIA’s effort to leverage both their existing strength in the GPU space and their newfound efforts in the CPU space to deliver a semi-integrated CPU/GPU product unlike anything their top-line competitors offer. With NVIDIA’s traditional dominance in the GPU space, the company has essentially been working backwards, combining their GPU technology with other types of processors (CPUs, DPUs, etc.) in order to access markets that benefit from GPU acceleration, but where fully discrete GPUs may not be the best solution.

NVIDIA Grace Hopper Specifications

| | Grace Hopper (GH200) |
|---|---|
| CPU Cores | 72 |
| CPU Architecture | Arm Neoverse V2 |
| CPU Memory Capacity | <=480GB LPDDR5X (ECC) |
| CPU Memory Bandwidth | <=512GB/sec |
| GPU SMs | 132 |
| GPU Tensor Cores | 528 |
| GPU Architecture | Hopper |
| GPU Memory Capacity | <=96GB |
| GPU Memory Bandwidth | <=4TB/sec |
| GPU-to-CPU Interface | 900GB/sec NVLink 4 |
| TDP | 450W - 1000W |
| Manufacturing Process | TSMC 4N |
| Interface | Superchip |

In this first NVIDIA HPC CPU + GPU mash-up, the Hopper GPU is the known side of the equation. While it only started shipping in appreciable volumes this year, NVIDIA was detailing the Hopper architecture and performance expectations over a year ago. Based on the 80B transistor GH100 GPU, H100 brings just shy of 1 PFLOPS of FP16 matrix math throughput for AI workloads, as well as 80GB of HBM3 memory. H100 is itself already a huge success – thanks to the explosion of ChatGPT and other generative AI services, NVIDIA is already selling everything they can make – but NVIDIA is still pushing ahead with their efforts to break into markets where the workloads require closer CPU/GPU integration.

Being paired with H100, in turn, is NVIDIA’s Grace CPU, which itself just entered full production a couple of months ago. The Arm Neoverse V2-based chip packs 72 CPU cores, and comes with up to 480GB of LPDDR5X memory. And while the CPU cores are themselves plenty interesting, the bigger twist with Grace has been NVIDIA’s decision to co-package the CPU with LPDDR5X, rather than using slotted DIMMs. The on-package memory has allowed NVIDIA to use both higher-clocked and lower-power memory – at the cost of expandability – which makes Grace unlike any other HPC-class CPU on the market. And it is potentially a very big deal for Large Language Model (LLM) training, given the emphasis on both dataset sizes and the memory bandwidth needed to shuffle that data around.

It’s that data shuffling, in turn, that helps to define a single Grace Hopper board as something more than just a CPU and GPU glued together on the same board. Because NVIDIA equipped Grace with NVLink support – NVIDIA’s proprietary high-bandwidth chip interconnect – Grace and Hopper have a much faster interconnect than a traditional, PCIe-based CPU + GPU setup. The resulting NVLink Chip-to-Chip (C2C) link offers 900GB/second of bandwidth between the two chips (450GB/sec in each direction), giving Hopper the ability to talk back to Grace even faster than Grace can read or write to its own memory.

The resulting board, which NVIDIA calls their GH200 “superchip”, is meant to be NVIDIA’s answer to the AI and HPC markets for the next product cycle. For customers who need a more local CPU than a traditional CPU + GPU setup – or perhaps more pointedly, more quasi-local memory than a stand-alone GPU can be equipped with – Grace Hopper is NVIDIA’s most comprehensive compute product yet. Meanwhile, with there being some uncertainty over just how prevalent the Grace-only (CPU-only) superchip will be, given that NVIDIA is currently on an AI bender, Grace Hopper may very well end up being where we see the most of Grace as well. According to NVIDIA, systems incorporating GH200 chips are slated to be available later this year.

DGX GH200 AI Supercomputer: Grace Hopper Goes Straight To the Big Leagues

Meanwhile, even though Grace Hopper is not technically out the door yet, NVIDIA is already at work building its first DGX system around the chip. Though in this case, “DGX” may be a bit of a misnomer for the system, which unlike other DGX systems isn’t a single node, but rather a full-on multi-rack computational cluster – hence NVIDIA terming it a “supercomputer.”

At a high level, the DGX GH200 AI Supercomputer is a complete, turn-key, 256-node GH200 cluster. Spanning some 24 racks, a single DGX GH200 contains 256 GH200 chips – and thus, 256 Grace CPUs and 256 H100 GPUs – as well as all of the networking hardware needed to interlink the systems for operation. In cumulative total, a DGX GH200 cluster offers 120TB of CPU-attached memory, another 24TB of GPU-attached memory, and a total of 1 EFLOPS of FP8 throughput (with sparsity).

Look Closer: That's Not a Server Node – That's 24 Server Racks

Linking the nodes together is a two-layer networking system built around NVLink. 96 local L1 switches provide immediate communications between the GH200 blades, while another 36 L2 switches provide a second layer of connectivity tying together the L1 switches. And if that’s not enough scalability for you, DGX GH200 clusters can be further scaled up in size by using InfiniBand, which is present in the cluster as part of NVIDIA’s use of ConnectX-7 network adapters.

The target market for the sizable silicon cluster is training large AI models. NVIDIA is leaning heavily on their existing hardware and toolsets in the field, combined with the sheer amount of memory and memory bandwidth a 256-node cluster affords, to be able to accommodate some of the largest AI models around. The recent explosion in interest in large language models has exposed just how much memory capacity is a constraining factor, so this is NVIDIA’s attempt to offer a single-vendor, integrated solution for customers with especially large models.

And while not explicitly disclosed by NVIDIA, in a sign that they are pulling out all of the stops for the DGX GH200 cluster, the memory capacities they’ve listed indicate that NVIDIA isn’t just shipping regular H100 GPUs as part of the system; rather, they are using their limited-availability 96GB models, which have the normally-disabled 6th stack of HBM3 memory enabled. So far, NVIDIA only offers these H100 variants in a handful of products – the specialty H100 NVL PCIe card and now some GH200 configurations – so DGX GH200 is slated to get some of NVIDIA’s best silicon.

Of course, don’t expect a supercomputer from NVIDIA to come cheaply. While NVIDIA is not announcing any pricing this far in advance, based on HGX H100 board pricing (8x H100s on a carrier board for $200K), a single DGX GH200 is easily going to cost somewhere in the low 8 digits. Suffice it to say, DGX GH200 is aimed at a rather specific subset of enterprise clientele – those who need to do a lot of large model training and have the deep pocketbooks to pay for a complete, turn-key solution.

Ultimately, however, DGX GH200 isn’t just meant to be a high-end system for NVIDIA to sell to deep-pocketed customers; it’s also the blueprint for helping their hyperscaler customers build their own GH200-based clusters. Building such a system is, after all, the best way to demonstrate how it works and how well it works, so NVIDIA is forging their own path in this regard. And while NVIDIA would no doubt be happy to sell a whole lot of these DGX systems directly, so long as it gets hyperscalers, CSPs, and others adopting GH200 in large numbers (and not, say, rival products), then that’s still going to be a win in NVIDIA’s books. In the meantime, for the handful of businesses that can afford a DGX GH200 AI Supercomputer, according to NVIDIA the systems will be available by the end of the year.
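The cluster-level numbers follow directly from the chip-level figures above; a quick consistency check:

```python
# Checking DGX GH200 cluster totals against the per-chip figures above.
nodes = 256
cpu_mem_gb = 480   # max LPDDR5X per Grace CPU
gpu_mem_gb = 96    # HBM3 per H100 (6-stack variant)

print(f"CPU-attached memory: {nodes * cpu_mem_gb / 1024:.0f} TB")  # 120 TB
print(f"GPU-attached memory: {nodes * gpu_mem_gb / 1024:.0f} TB")  # 24 TB

# 1 EFLOPS of FP8 over 256 GPUs implies ~3.9 PFLOPS per H100 (with
# sparsity), consistent with the GPU's rated FP8 throughput.
print(f"Per-GPU FP8: {1000 / nodes:.1f} PFLOPS")
```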
- Arm Unveils 2023 Mobile CPU Core Designs: Cortex-X4, A720, and A520 - the Armv9.2 Family on 29 May 2023 at 0:30
Throughout the world, if there's one universal constant in the smartphone and mobile device market, it's Arm. Whether it's mobile chip makers basing their SoCs on Arm's fully synthesized CPU cores, or just relying on the Arm ISA and designing their own chips, at the end of the day, Arm underlies virtually all of it. That kind of market saturation and relevance is a testament to all of the hard work that Arm has done in the last few decades getting to this point, but it's also a grave responsibility – for most mobile SoCs, their performance only moves forward as quickly as Arm's own CPU core designs and associated IP do. Consequently, we've seen Arm settle into a yearly cadence for their client IP, and this year is no exception. Timed to align with this year's Computex trade show in Taiwan, Arm is showing off a new set of Cortex-A and Cortex-X series CPU cores – as well as a new generation of GPU designs – which we'll see carrying the torch for Arm starting later this year and into 2024. These include the flagship Cortex-X4 core, as well as Arm's mid-core Cortex-A720 and the new little-core Cortex-A520.

Arm's latest CPU cores build upon the foundation of Armv9 and their Total Compute Solution (TCS21/22) ecosystem. For their 2023 IP, Arm is rolling out a wave of minor microarchitectural improvements through its Cortex line of cores, with subtle changes designed to push efficiency and performance throughout, all the while moving entirely to the AArch64 64-bit instruction set. The latest CPU designs from Arm are also designed to align with the ongoing industry-wide drive towards improved security, and while these features aren't strictly end-user facing, they do underscore how Arm's generational improvements extend to more than just performance and power efficiency. In addition to refining its CPU cores, Arm has undertaken a comprehensive upgrade of its DynamIQ Shared Unit core complex block with the DSU-120. Although the modifications introduced are subtle, they hold substantial significance in terms of improving the efficiency of the fabric holding Arm CPU cores together, along with extending Arm's reach even further in terms of performance scalability, with support for up to 14 CPU cores in a single block – a move designed to make Cortex-A/X even better suited for laptops.
- TSMC Preps 6x Reticle Size Super Carrier Interposer for Extreme SiP Processors on 26 May 2023 at 19:15
As part of their efforts to push the boundaries on the largest manufacturable chip sizes, Taiwan Semiconductor Manufacturing Co. is working on its new Chip-On-Wafer-On-Substrate-L (CoWoS-L) packaging technology, which will allow it to build larger Super Carrier interposers. Aimed at the 2025 time span, the next generation of TSMC's CoWoS technology will allow for interposers reaching up to six times TSMC's maximum reticle size, up from 3.3x for their current interposers. Such formidable system-in-packages (SiPs) are intended for use by performance-hungry data center and HPC chips, a niche market that has proven willing to pay significant premiums to be able to place multiple high-performance chiplets on a single package.

"We are currently developing a 6x reticle size CoWoS-L technology with Super Carrier interposer technology," said Yujun Li, TSMC's director of business development in charge of the foundry's High Performance Computing Business Division, at the company's European Technology Symposium 2023.

Global megatrends like artificial intelligence (AI) and high-performance computing (HPC) have created demand for seemingly infinite amounts of compute horsepower, which is why companies like AMD, Intel, and NVIDIA are building extremely complex processors to address those AI and HPC applications. One of the ways to increase the compute capabilities of processors is to increase their transistor count; and to do so efficiently these days, companies use multi-tile chiplet designs. Intel's impressive 47-tile Ponte Vecchio GPU is a good example of such designs, but TSMC's CoWoS-L packaging technology will enable the foundry to build Super Carrier interposers for even more gargantuan processors. The theoretical EUV reticle limit is 858mm2 (26 mm by 33 mm), so six of these masks would enable SiPs of 5148 mm2. Such a large interposer would not only afford room for multiple large compute chiplets, but would also leave plenty of room for things like 12 stacks of HBM3 (or HBM4) memory, which means a 12288-bit memory interface with bandwidth reaching as high as 9.8 TB/s.

"The Super Carrier interposer features multiple RDL layers on the front as well as on the backside of the interposer for yield and manufacturability," explained Li. "We can also integrate various passive components in the interposer for performance. This six reticle-size CoWoS-L will be qualified in 2025."

Building 5148 mm2 SiPs is an extremely tough task, and we can only wonder how much they will cost and how much their developers will charge for them. At present, NVIDIA's H100 accelerator, whose packaging spans an interposer multiple reticles in size, costs around $30,000; a considerably larger and more powerful chip would likely push prices higher still. But paying for the cost of large processors will not be the only huge investment that data center operators will need to make. The amount of active silicon that 5148 mm2 SiPs can house will almost certainly result in some of the most power-hungry HPC chips produced yet – chips that will also need equally powerful liquid cooling to match. To that end, TSMC has disclosed that it has been testing on-chip liquid cooling technology, stating that it has managed to cool down silicon packages with power levels as high as 2.6 kW. So TSMC does have some ideas in mind to handle the cooling needs of these extreme chips, if only at the price of integrating even more cutting-edge technology.
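The headline figures in that item are straightforward to verify (our arithmetic, using the quoted reticle limit and standard HBM3 per-stack bandwidth):

```python
# Verifying the interposer-area and HBM figures quoted above.
reticle_mm2 = 26 * 33                  # EUV reticle limit: 858 mm^2
print(f"6x reticle interposer: {6 * reticle_mm2} mm^2")  # 5148 mm^2

stacks = 12                            # HBM3 stacks, 1024 bits each
print(f"Memory bus width: {stacks * 1024} bits")         # 12288-bit
print(f"Aggregate bandwidth: {stacks * 819.2 / 1000:.1f} TB/s")  # ~9.8 TB/s
```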
- TSMC Details N4X Process for HPC: Extreme Performance at Minimum Leakage on 26 May 2023 at 17:00
At its 2023 Technology Symposium, TSMC revealed some additional details about its upcoming N4X technology, which is designed specifically for high-performance computing (HPC) applications. This node promises to enable ultra-high performance and improve efficiency while maintaining IP compatibility with the N4P (4 nm-class) process technology. "N4X truly sets a new benchmark for how we can push extreme performance while minimizing the leakage power penalty," said Yujun Li, TSMC's director of business development in charge of the foundry's High Performance Computing Business Division.

TSMC's N4X technology belongs to the company's N5 (5 nm-class) family, but it is enhanced in several ways and is optimized for voltages of 1.2V and higher in overdrive mode. To achieve higher performance and efficiency, TSMC's N4X improves transistor design in three key areas. Firstly, the foundry refined its transistors to boost both processing speed and drive currents. Secondly, it incorporated its new high-density metal-insulator-metal (MiM) capacitors to provide reliable power under high workloads. Lastly, it modified the back-end-of-line metal stack to provide more power to the transistors. In particular, N4X adds four new devices on top of the N4P device offerings, including ultra-low-voltage transistors (uLVT) for applications that need to be very efficient, and extremely-low threshold voltage transistors (eLVT) for applications that need to work at high clocks. For example, N4X uLVT with overdrive offers 21% lower power at the same speed when compared to N4P eLVT, whereas N4X eLVT in overdrive offers 6% higher speed for critical paths when compared to N4P eLVT.

Advertised PPA Improvements of New Process Technologies
(Data announced during conference calls, events, press briefings, and press releases)

| TSMC | N5 vs N7 | N5P vs N5 | N5HPC vs N5 | N4 vs N5 | N4P vs N5 | N4P vs N4 | N4X vs N5 | N4X vs N4P | N3 vs N5 |
|---|---|---|---|---|---|---|---|---|---|
| Power | -30% | -10% | ? | lower | -22% | - | ? | ? | -25-30% |
| Performance | +15% | +5% | +7% | higher | +11% | +6% | +15% or more | +4% or more | +10-15% |
| Logic Area Reduction (Density) | 0.55x, -45% (1.8x) | - | - | 0.94x, -6% (1.06x) | 0.94x, -6% (1.06x) | - | ? | ? | 0.58x, -42% (1.7x) |
| Volume Manufacturing | Q2 2020 | 2021 | Q2 2022 | 2022 | 2023 | H2 2022 | H1 2024? | H1 2024? | H2 2022 |

While N4X offers significant performance enhancements compared to N4 and N4P, it continues to use the same SRAM, standard I/O, and other IPs as N4P, which enables chip designers to migrate their designs to N4X easily and cost-effectively. Meanwhile, keeping in mind N4X's IP compatibility with N4P, it is logical to expect the transistor density of N4X to be more or less in line with that of N4P. Though given the focus of this technology, expect chip designers to use it to get extreme performance rather than maximum transistor density and small chip dimensions.

TSMC claims that N4X has achieved its SPICE model performance targets, so customers can start using the technology today for HPC designs that will enter production sometime next year. For TSMC, N4X is an important technology, as HPC designs are expected to be the company's main revenue growth driver in the coming years. The contract chipmaker anticipates HPC to account for 40% of its revenue in 2030, followed by smartphones (30%) and automotive (15%) applications.
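One way to read the table is that the N4X-vs-N5 speed claim is roughly the product of the two intermediate steps. A quick check, assuming the gains compound multiplicatively (an assumption on our part, not something TSMC states):

```python
# Cross-checking the table: N4P-vs-N5 compounded with N4X-vs-N4P should
# land near the N4X-vs-N5 claim (assumes multiplicative speed gains).
n4p_vs_n5 = 1.11     # +11%
n4x_vs_n4p = 1.04    # "+4% or more"

n4x_vs_n5 = n4p_vs_n5 * n4x_vs_n4p
print(f"Implied N4X vs N5: +{(n4x_vs_n5 - 1) * 100:.1f}%")  # ~+15.4%
# ...consistent with the "+15% or more" figure in the table.
```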
- NVIDIA Reports Q1 FY2024 Earnings: Bigger Things to Come as NV Approaches $1T Market Cap on 25 May 2023 at 13:00
Closing out the most recent earnings season for the PC industry is, as always, NVIDIA. The company’s unusual, nearly year-ahead fiscal calendar means that they get the benefit of being casually late in reporting their results. And in this case, they’ve ended up being the proverbial case of saving the best for last. For the first quarter of their 2024 fiscal year, NVIDIA booked $7.2 billion in revenue, a 13% drop from the year-ago quarter. Like the rest of the chip industry, NVIDIA has been weathering a significant slump in demand for computing products over the past few quarters, which in turn has dented NVIDIA’s revenue and profitability. However, while NVIDIA’s consumer-focused gaming division has continued to take matters on the chin, the strong performance of NVIDIA’s data center group has kept the company as a whole fairly profitable, with the most recent quarter setting a segment record and helping NVIDIA to avoid the tough financial situations faced by rivals AMD and Intel.

NVIDIA Q1 FY2024 Financial Results (GAAP)

| | Q1 FY2024 | Q4 FY2023 | Q1 FY2023 | Q/Q | Y/Y |
|---|---|---|---|---|---|
| Revenue | $7.2B | $6.1B | $8.3B | +19% | -13% |
| Gross Margin | 64.6% | 63.3% | 65.5% | +1.3ppt | -0.9ppt |
| Operating Income | $2.1B | $1.3B | $1.9B | +70% | +15% |
| Net Income | $2.0B | $1.4B | $1.6B | +44% | +26% |
| EPS | $0.82 | $0.57 | $0.64 | +44% | +28% |

To that end, while Q1’FY24 was not by any means a record quarter for NVIDIA, it was still a relatively strong one for the company. NVIDIA’s net income of $2 billion makes for one of their better quarters in that regard, and it’s actually up 26% year-over-year despite the revenue drop. That said, reading between the lines will find that NVIDIA paid their Arm acquisition breakup fee last year (Q1’FY23), so NVIDIA’s GAAP net income looks a bit better than it otherwise would; non-GAAP net income would be down 21%. Meanwhile, NVIDIA’s gross margins have held strong in the most recent quarter, with NVIDIA posting a GAAP gross margin of 64.6%.

But even a solid quarter during an industry slump is arguably not the biggest news to come out of NVIDIA’s most recent earnings report. Rather, it’s the company’s projections for Q2’FY24. In short, NVIDIA is expecting revenue to explode in Q2, with the company forecasting $11 billion in sales. Should it come to fruition, such a quarter would blow well past NVIDIA’s previous revenue records and shatter Wall Street expectations. As a result, NVIDIA’s stock has already taken off in overnight trading, and by the time the market opens a bit later this morning, NVIDIA is expected to be a $930B+ company, knocking on the door of a market capitalization of a trillion dollars.
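The Q/Q and Y/Y columns in the table are easy to reproduce from the rounded revenue figures (rounding means the results land within a point of NVIDIA's reported values):

```python
# Reproducing the revenue change columns from the (rounded) dollar figures.
q1_fy24, q4_fy23, q1_fy23 = 7.2, 6.1, 8.3   # revenue in $B

print(f"Q/Q: {(q1_fy24 / q4_fy23 - 1) * 100:+.0f}%")  # ~+18% (reported +19%)
print(f"Y/Y: {(q1_fy24 / q1_fy23 - 1) * 100:+.0f}%")  # -13%
```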
- TSMC: We Have Working CFET Transistors in the Lab, But They Are Generations Away on 25 May 2023 at 12:00
Offering an update on its work with complementary field-effect transistors (CFETs) as part of the company's European Technology Symposium 2023, TSMC has revealed that it has working CFETs within its labs. But even with the progress TSMC has made so far, the technology is still in its early days, generations away from mass production. In the meantime, ahead of CFETs will come gate-all-around (GAA) transistors, which TSMC will be introducing with its upcoming N2 (2nm-class) production nodes.

One of TSMC's long-term bets as the eventual successor to GAAFETs, CFETs are expected to offer advantages over GAAFETs and FinFETs when it comes to power efficiency, performance, and transistor density. However, these potential benefits are theoretical and dependent on overcoming significant technical challenges in fabrication and design. In particular, CFETs are projected to require extremely precise lithography (think High-NA EUV tools) to integrate both n-type and p-type FETs into a single device, as well as determining the most suitable materials to ensure appropriate electronic properties. Just like other chip fabs, TSMC is working on a variety of transistor design types, so having CFETs working in the lab is important. But it's also not something that is completely unexpected; researchers elsewhere have previously assembled CFETs, so now it's up to the industry-focused TSMC to figure out how to bring about mass production. To that end, TSMC is stressing that CFETs are not in the near future.

"Let me make a clarification on that roadmap: everything beyond the nanosheet is something we will put on our [roadmap] to tell you there is still a future out there," said Kevin Zhang, senior vice president responsible for technology roadmap and business strategy. "We will continue to work on different options. I can also add the one-dimensional material-[based transistors]; all of those are being researched and investigated as potential future candidates right now. We will not tell you exactly what the transistor architecture will be beyond the nanosheet."

Indeed, research projects take a long time, and when you are running many of them in parallel, you never know which of them will come to fruition. Even at that point, it is hard to tell which of the potential structure candidates TSMC (or any other fab) will choose. Ultimately, fabs have to meet the needs of their larger customers (e.g., Apple, AMD, MediaTek, Nvidia, Qualcomm) at the time when a given production node is ready for high volume manufacturing. To that end, TSMC is going to use GAA structures for years to come, according to Zhang.

"Nanosheet is starting at 2nm; it is reasonable to project that nanosheet will be used for at least a couple of generations, right," asked Zhang rhetorically. "So, if you think about CFETs, we've leveraged [FinFETs] for five generations, which is more than 10 years. Maybe [device structure] is somebody else's problem to worry about; then you can continue to write the story."

Source: TSMC European Technology Symposium 2023
- Corsair Launches 2000D Airflow SFF Cases For Triple-Slot GPUs on 24 May 2023 at 21:00
Corsair has expanded the brand's mini-ITX case lineup with the new 2000D Airflow series. The 2000D Airflow and 2000D RGB Airflow small-form-factor (SFF) cases cater specifically to compact but high-performance systems. With a volume of 24.4 liters, the Corsair 2000D series cases have enough room to house the most demanding hardware, including a 360 mm AIO CPU liquid cooler and full-size graphics cards up to a triple-slot design. The 2000D Airflow is available with and without RGB-lit fans and in white or black colors; the case therefore comes in four different variants. Regardless, the 2000D Airflow is a mini-ITX case that prioritizes airflow for the components housed inside. For this same reason, Corsair designed the 2000D Airflow with removable steel mesh front, side, and rear panels for maximum ventilation from all directions. The case measures 18.03 x 10.67 x 7.87 inches and weighs just under 10 pounds. As a result, it doesn't require much space, whether users decide to put it on or under the desk. Being an SFF case, the 2000D Airflow only accepts mini-ITX motherboards.

The 2000D Airflow can accommodate up to eight 120 mm and two 140 mm cooling fans, doing the case's name justice. If a user fits the 2000D Airflow with a single-slot graphics card, it opens the possibility of cooling the graphics card with two additional fan mounts. For CPU air cooling enthusiasts, the 2000D Airflow supports coolers with a maximum height of up to 6.69 inches. Given the generous number of fan mounts, Corsair's SFF case offers plentiful liquid cooling options. It supports 120 mm, 140 mm, 240 mm, 280 mm, and 360 mm radiators. Users can even fit multiple radiators – for example, a 360 mm unit on the side and a 240 mm one at the rear in a scenario with a single-slot graphics card.

The 2000D Airflow has three case expansion slots, accommodating beefy graphics cards with up to three PCI slots in a vertical orientation. Consumers will have no problem fitting a GeForce RTX 4090 into the 2000D Airflow. However, they must ensure the graphics card is shorter than 14.37 inches, since that's the maximum length permitted inside the 2000D Airflow. Storage options, however, are limited to three 2.5-inch drives, whether SSDs or hard drives. In addition, one of the case's caveats is that it only accepts SFX or SFX-L power supplies, reducing options to units with a length of up to 5.12 inches. Nevertheless, Corsair aficionados will have no issues finding an adequate unit within the brand's ecosystem, since the company offers the SF series and SF-L series with capacities varying from 600 watts to 750 watts on the former and 850 watts to 1,000 watts on the latter. Regarding the I/O design, the 2000D Airflow offers one USB 3.2 Gen 2 Type-C port, two USB 3.2 Gen 1 Type-A ports, and one 3.5 mm audio jack on the front panel.

The 2000D Airflow retails for $139.99. The 2000D RGB Airflow, which has three pre-installed Corsair AF120 RGB Slim fans in the front intake, will set consumers back $199.99. Corsair backs its 2000D Airflow cases with a two-year warranty; in the case of the RGB variant, the AF120 RGB Slim fans come with a three-year warranty.
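For reference, the quoted 24.4-liter figure roughly checks out against the case's external dimensions; the simple bounding-box math below comes out slightly larger, presumably because protrusions are excluded from the official number:

```python
# Sanity check: case volume from the quoted 18.03 x 10.67 x 7.87 inch dims.
LITERS_PER_CUBIC_INCH = 0.0163871

volume_l = 18.03 * 10.67 * 7.87 * LITERS_PER_CUBIC_INCH
print(f"Bounding-box volume: {volume_l:.1f} L")  # ~24.8 L vs. 24.4 L quoted
```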
- AMD Launches Zen 2-based Ryzen and Athlon 7020C Series For Chromebooks on 23 May 2023 at 13:00
Last year, AMD unveiled their entry-level 'Mendocino' mobile parts, which combine their 2019 Zen 2 cores and RDNA 2.0 integrated graphics to create an affordable selection of configurations for mainstream mobile devices. Although much of the discussion over the last few months has been about their Ryzen 7040 mobile parts, AMD has launched four new SKUs explicitly designed for the Chromebook space: the Ryzen and Athlon 7020C series. Some of the most notable features of AMD's Ryzen/Athlon 7020C series processors for Chromebooks include three different configurations of cores and threads, ranging from entry-level 2C/2T up to 4C/8T, all with AMD's RDNA 2-based Radeon 610M mobile integrated graphics. Designed for a wide variety of tasks and users, including but not limited to consumers, education, and businesses, AMD's Ryzen 7020C series looks to offer similar specifications and features to their regular 7020 series mobile parts, but expands things to the broader Chromebook and ChromeOS ecosystem.
- Micron Expects Impact as China Bans Its Products from 'Critical' Industries on 23 May 2023 at 12:00
In the latest move in the tit-for-tat technology trade war between the United States and China, on Sunday the Cyberspace Administration of China (CAC) announced that it was effectively banning Micron's products from being purchased in the country going forward. Citing that Micron's products have failed to pass its cybersecurity review requirements, the administration has ordered that operators of key infrastructure should stop buying products containing chips from the U.S.-based company. "The review found that Micron's products have serious hidden dangers of network security problems, which cause major security risks to China's key information infrastructure supply chain and affect China's national security," a statement by the CAC reads. "Therefore, the Cyber Security Review Office has made a conclusion that it will not pass the network security review in accordance with the law. According to the Cyber Security Law and other laws and regulations, operators of key information infrastructure in China should stop purchasing Micron's products." The CAC statement does not elaborate on the nature of the 'hidden dangers' or the risks they pose. Furthermore, the agency did not detail which companies are considered 'operators of key information infrastructure,' though we can speculate that these are telecommunication companies, government agencies, cloud datacenters serving socially important clients, and a variety of other entities that may be deemed crucial for society or industry. For U.S.-based Micron, while the Chinese market is a minor one overall, it's not so small as to be inconsequential. China and Hong Kong represent some 25% of Micron's revenues, so the drop in sales is expected to have an impact on Micron's financials. "As we have disclosed in our filings, China and Hong Kong headquartered companies represent about 16% of our revenues," said Mark Murphy, Chief Financial Officer at Micron, at the 51st Annual J.P. Morgan Global Technology, Media and Communications Conference. "In addition, we have distributors that sell to China headquartered companies. We estimate that the combined direct sales and indirect sales through distributors to China headquartered companies is about a quarter of our total revenue." The trade war implications aside, the 'key information infrastructure' wording of the government order leaves unclear for now just how wide the Micron ban will be - particularly, whether Micron's products will still be allowed to be imported for rank-and-file consumer goods. Many of Micron's Chinese clients assemble PCs, smartphones, and other consumer electronics sold all around the world, so the potential impact on Micron's sales could be significantly lower than 25% of its revenue, so long as those clients are allowed to continue using Micron's parts. "We are evaluating what portion of our sales could be impacted by a critical information infrastructure ban," Murphy added. "We are currently estimating a range of impact in the low single digits percent of our company total revenue at the low end and high single-digit percentage of total company revenue at the high end." The CAC decision comes after the U.S. government barred Chinese chipmakers from buying advanced wafer fab equipment, which is going to have a significant impact on China-based SMIC and YMTC, and years after the U.S. government implemented curbs that essentially drove one of China's emerging DRAM makers out of business.
Whether the CAC decision was influenced by the U.S. government's sanctions against Chinese companies is, officially, an unanswered question. But as the latest barb between the two countries amidst their ongoing trade war, it's certainly not unprecedented. Sources: Micron, Reuters, SeekingAlpha, CAC.
- Intel HPC Updates For ISC 2023: Aurora Nearly Done, More Falcon Shores, and the Future of XPUs on 22 May 2023 at 16:45
With the annual ISC High Performance supercomputing conference kicking off this week, Intel is one of several vendors making announcements timed with the show. As the crown jewels of the company's HPC product portfolio have launched in the last several months, the company doesn't have any major new silicon announcements to make alongside this year's show – and unfortunately Aurora isn't yet up and running to take a shot at the Top500 list. So, following a tumultuous year thus far that has seen significant shifts in Intel's GPU roadmap in particular, the company is using ISC to recompose itself, using the backdrop of the show to lay out a fresh roadmap for HPC customers. Most notably, Intel is using this opportunity to better explain some of the hardware development decisions the company has made this year. That includes Intel's pivot on Falcon Shores, transforming it from an XPU into a pure GPU design, as well as a few more high-level details of what will eventually become Intel's next HPC-class GPU. Although Intel would clearly be perfectly happy to keep selling CPUs, the company has realigned (and continues to realign) for a diversified market where their high-performance customers need more than just CPUs.
- Kioxia BG6 Series M.2 2230 PCIe 4.0 SSD Lineup Adds BiCS6 to the Mix on 22 May 2023 at 13:00
Kioxia's BG series of M.2 2230 client NVMe SSDs has proved popular among OEMs and commercial system builders due to its low cost and small physical footprint. Today, the company is introducing a new generation of products in this postage stamp-sized lineup. The BG6 series builds on the Gen4 support added in the BG5 by updating the NAND generation from BiCS5 (112L) to BiCS6 (162L) for select capacities. The increase in per-die capacity now allows Kioxia to bring 2TB M.2 2230 SSDs to the market: while the BG5 series came in capacities of up to 1TB, the BG6 series adds a 2TB SKU. However, the NAND generation update is reserved for the 1TB and 2TB models. The BG series of SSDs from Kioxia originally started out as a single-chip solution for OEMs, in either a BGA package or an M.2 2230 module. The appearance of PCIe 4.0 and its demands for increased thermal headroom resulted in Kioxia getting rid of the single-chip BGA solution starting with the BG5, introduced in late 2021. The BG6 series continues the DRAMless strategy and dual-chip design (separate controller and flash packages) of the BG5. While the performance numbers for the BG5 strictly placed it in the entry-level category for PCIe 4.0 SSDs, the update to the NAND has now lifted performance to accepted mainstream levels for this segment. The DRAMless nature and the use of system DRAM (host memory buffer - HMB) for storing the flash translation layer (FTL) handicap performance slightly, preventing the drives from reaching high-end specifications. However, this translates to lower upfront cost and better thermal performance / lowered cooling costs - key constraints for OEMs and pre-built system integrators.

Kioxia BG6 SSD Specifications

| Capacity | 256 GB | 512 GB | 1 TB | 2 TB |
|----------|--------|--------|------|------|
| Form Factor | M.2 2230 or M.2 2280 | M.2 2230 or M.2 2280 | M.2 2230 or M.2 2280 | M.2 2230 or M.2 2280 |
| Interface | PCIe Gen4 x4, NVMe 1.4c | PCIe Gen4 x4, NVMe 1.4c | PCIe Gen4 x4, NVMe 1.4c | PCIe Gen4 x4, NVMe 1.4c |
| NAND Flash | 112L BiCS5 3D TLC | 112L BiCS5 3D TLC | 162L BiCS6 3D TLC | 162L BiCS6 3D TLC |
| Sequential Read | ? MB/s | ? MB/s | 6000 MB/s | 6000 MB/s |
| Sequential Write | ? MB/s | ? MB/s | 5000 MB/s | 5300 MB/s |
| Random Read | ? IOPS | ? IOPS | 650K IOPS | 850K IOPS |
| Random Write | ? IOPS | ? IOPS | 900K IOPS | 900K IOPS |
| Power (Active) | ? W | ? W | ? W | ? W |
| Power (Idle) | ? mW | ? mW | ? mW | ? mW |

The company is focusing on the 1TB and 2TB SKUs with BG6 due to higher demand for those capacities in the end market. The 256GB and 512GB variants are under development. While the M.2 2230 form factor is expected to be the mainstay, Kioxia is also planning to sell single-sided M.2 2280 versions for systems that do not support M.2 2230 SSDs. In addition to client systems, Kioxia also expects the BG6 SSDs to be used as boot drives in servers and storage arrays. To that end, the drives include a few features that are not considered essential for consumer SSDs, such as support for the NVMe 1.4c specification (including interfacing over SMBus for tighter thermal management), encryption using TCG Pyrite / Opal, power loss notification for protection against forced shutdowns, and platform firmware recovery. The availability of performance numbers for the 1TB SKU allows us to note that the BG6 delivers more than 1.7x the sequential performance of the BG5, with random reads 1.3x better and random write performance doubled. These are obviously fresh out-of-the-box numbers (as is typical of specifications for consumer / client SSDs). Power consumption numbers were not made available at the time of announcement [Update: Kioxia indicated that the finalized specifications (inclusive of power numbers) should become available in July].
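For a rough sense of the generational uplift, the quoted ratios can be inverted to back-calculate the implied BG5 (1TB) figures. A minimal sketch in Python; note that the derived BG5 numbers are inferences from the ratios above, not official Kioxia specifications:

```python
# Back-calculating implied BG5 (1TB) figures from the BG6 specs and the
# quoted uplift factors (~1.7x sequential, ~1.3x random read, ~2x random write).
# The derived BG5 numbers are inferences, not official specifications.
bg6_1tb = {"seq_read (MB/s)": 6000, "rand_read (IOPS)": 650_000, "rand_write (IOPS)": 900_000}
uplift = {"seq_read (MB/s)": 1.7, "rand_read (IOPS)": 1.3, "rand_write (IOPS)": 2.0}

for metric, value in bg6_1tb.items():
    implied_bg5 = value / uplift[metric]
    print(f"{metric}: BG6 {value:,} -> implied BG5 ~{implied_bg5:,.0f}")
```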
Kioxia will be sampling the drives to OEMs and system integrators in the second half of the year. Systems equipped with these drives can be expected in the hands of consumers for the holiday season or early next year. Pricing information was not provided as part of the announcement, but Kioxia is demonstrating the drives at Dell Technologies World 2023, being held in Las Vegas from May 22-25.
- Micron to Bring EUV to Japan: 1γ Process DRAM to Be Made in Hiroshima in 2025 on 19 May 2023 at 13:00
Micron this week officially said that it would equip its fab in Hiroshima, Japan, to produce DRAM chips on its 1γ (1-gamma) process technology, its first node to use extreme ultraviolet lithography, in 2025. The company will be the first chipmaker to use EUV for volume production in Japan, and its fabs in Hiroshima and Taiwan will be its first sites to use the upcoming 1γ technology. As the only major DRAM maker that has not yet adopted extreme ultraviolet lithography, Micron had planned to start using it with its 1γ process (its 3rd generation 10nm-class node) in 2024. But due to the PC market slump and its spending cuts, the company had to delay the plan to 2025. Micron's 1γ process technology is set to use EUV for several layers, though the company does not disclose how many. What the company does say is that its 1γ node will enable the world's smallest memory cell, which is a bold claim considering that Micron cannot possibly know what its rivals are going to have in 2025. Last year the 1-gamma technology was at the 'yield enablement' stage, which means that the company was putting DRAM samples through extensive testing and quality control procedures. At this point, the company may implement new inspection tools to identify defects and then introduce improvements to particular process steps (e.g., lithography, etching) to maximize yields. "Micron's Hiroshima operations have been central to the development and production of several industry-leading technologies for memory over the past decade," Micron President and CEO Sanjay Mehrotra said. "We are proud to be the first to use EUV in Japan and to be developing and manufacturing 1-gamma at our Hiroshima fab." To produce memory chips on its 1-gamma node at its Hiroshima fab, Micron needs to install ASML's Twinscan NXE scanners, which cost about $200 million per unit. To equip its fab with advanced tools, Micron secured a ¥46.5 billion ($320 million) grant from the Japanese government last September. Meanwhile, Micron says it will invest ¥500 billion ($3.618 billion) in the technology 'over the next few years, with close support from the Japanese government.' "Micron is the only company that manufactures DRAM in Japan and is critical to setting the pace for not only the global DRAM industry but our developing semiconductor ecosystem," said Satoshi Nohara, METI Director-General of the Commerce and Information Policy Bureau. "We are pleased to see our collaboration with Micron take root in Hiroshima with state-of-the-art EUV to be introduced on Japanese soil. This will not only deepen and advance the talent and infrastructure of our semiconductor ecosystem, it will also unlock exponential growth and opportunity for our digital economy."
- Samsung Kicks Off DDR5 DRAM Production on 12nm Process Tech, DDR5-7200 in the Works on 18 May 2023 at 23:00
Samsung on Thursday said it had started high volume production of DRAM chips on its latest 12nm fabrication process. The new manufacturing node has allowed Samsung to reduce the power consumption of its DRAM devices, as well as to decrease their costs significantly compared to its previous-generation node. According to Samsung's announcement, the company's 12nm fabrication process is being used to produce 16Gbit DDR5 memory chips. And while the company is already producing DDR5 chips with that capacity (e.g. K4RAH086VB-BCQK), the switch to the newer and smaller 12nm process has paid off both in terms of power consumption and die size. Compared to DDR5 dies made on the company's previous-generation node (14nm), the new 12nm dies offer up to 23% lower power consumption, and Samsung is able to produce 20% more dies per wafer (i.e., the DDR5 dies are tangibly smaller). Samsung says that the key innovation of its 12nm DRAM fabrication process is the use of a new high-k material for DRAM cell capacitors, which enabled it to increase cell capacitance to boost performance without increasing cell dimensions and die sizes. Higher DRAM cell capacitance means a DRAM cell can hold a larger charge, reducing power-draining refresh cycles and hence increasing performance. However, larger capacitors typically result in increased cell and die size, which makes the resulting dies more expensive. DRAM makers have been addressing this by using high-k materials for years, but finding these materials gets trickier with each new node, as memory makers also have to take into account yields and the production infrastructure they have. Apparently, Samsung has succeeded in doing so with its 12nm node, though it does not make any disclosures on the matter. That Samsung has succeeded in reducing die size by a meaningful amount at all is quite remarkable, as analog components like capacitors were some of the first parts of chips to stop scaling down with finer process nodes. In addition to introducing a new high-k material, Samsung also reduced operating voltage and noise for its 12nm DDR5 ICs to offer a better balance of performance and power consumption compared to predecessors. Notably, Samsung's 12nm DRAM technology looks to be the company's third-generation production node for memory to use extreme ultraviolet lithography. The first, D1x, was designed purely as a proof of concept, while its successor D1a, which has been in use since 2021, uses EUV for five layers. Meanwhile, it is unclear to what degree Samsung's 12nm node is using EUV tools. "Using differentiated process technology, Samsung's industry-leading 12nm-class DDR5 DRAM delivers outstanding performance and power efficiency," said Jooyoung Lee, Executive Vice President of DRAM Product & Technology at Samsung Electronics. Meanwhile, Samsung is also eyeing faster memory speeds with its new 12nm DDR5 dies. According to the company, these dies can run as fast as DDR5-7200 (i.e. 7.2Gbps/pin), which is well ahead of what the official JEDEC specification currently allows. The voltage required isn't being stated, but if nothing else, it offers some promise for future XMP/EXPO memory kits.
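Circling back to the density claim: dies per wafer scale roughly inversely with die area, so the "20% more dies" figure implies the approximate die shrink shown in this quick sketch (a rough approximation that ignores wafer edge effects and yield differences):

```python
# "20% more dies per wafer" implies the die area shrank to roughly 1/1.2 of
# its former size, ignoring edge effects and yield differences.
dies_per_wafer_gain = 1.20
relative_die_area = 1 / dies_per_wafer_gain
print(f"New die area: ~{relative_die_area:.0%} of the 14nm-class die")  # ~83%
print(f"Implied area reduction: ~{1 - relative_die_area:.0%}")          # ~17%
```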
- Voltage Lockdown: Investigating AMD's Recent AM5 AGESA Updates on ASRock's X670E Taichi on 16 May 2023 at 16:00
It's safe to say that the last couple of weeks have been a bit chaotic for AMD and its motherboard partners. Unfortunately, it's been even more chaotic for some users of AMD's Ryzen 7000X3D processors. There have been several reports of Ryzen 7000 processors burning up in motherboards, in some cases burning out the chip socket itself and taking the motherboard with it. Over the past few weeks, we've covered the issue as it's unfolded, with AMD releasing two official statements and motherboard vendors scrambling to get their users to update firmware in what feels like a grab-it-quick fire sale, pun very much intended. Not everything has been going according to plan, with AMD having released two new AGESA firmware updates through its motherboard partners within a week to try and address the issues. The first firmware update made available to vendors, AGESA 1.0.0.6, addressed reports of SoC voltages being too high. This AGESA version put restrictions in place to limit that voltage to 1.30 V, and was quickly distributed to all of AMD's partners. More recently, motherboard vendors have pushed out even newer BIOSes which include AMD's AGESA 1.0.0.7 (BETA) update. With even more safety-related changes made under the hood, this is the firmware update AMD and its motherboard partners are pushing consumers to install to alleviate the issues – and prevent new ones from occurring. In this article, we'll be taking a look at the effects of all three sets of firmware (AGESA 1.0.0.5c through 1.0.0.7) running on our ASRock X670E Taichi motherboard. The goal is to uncover what, if any, changes there are to key variables with the AMD Ryzen 9 7950X3D, including SoC voltages and current draw under intensive memory-based workloads.
- Solidigm D5-P5430 Addresses QLC Endurance in Data Center SSDs on 16 May 2023 at 14:10
Solidigm has been extremely bullish on QLC SSDs in the data center. Compared to other flash vendors, their continued use of a floating gate cell architecture (while others moved on to charge trap configurations) has served them well in bringing QLC SSDs to the enterprise market. The company realized early on that the market was hungry for a low-cost, high-capacity SSD to drive per-rack capacity. In order to address this using their 144L 3D NAND generation, Solidigm created the D5-P5316. While that lineup did include a 30TB SKU for less than $100/TB, the QLC characteristics in general, and the use of a 16KB indirection unit (IU) in particular, limited its use-cases to read-heavy and large-sized sequential / random write workloads. Solidigm markets their data center SSDs under two families - the D7 line is meant for demanding workloads with 3D TLC flash, while the D5 series uses QLC flash and targets mainstream workloads and specialized non-demanding use-cases where density and cost are more important. The company further segments the latter family into 'Essential Endurance' and 'Value Endurance' lines. The popular D5-P5316 falls under the 'Value Endurance' line. The D5-P5430 being introduced today is a direct TLC replacement drive in the 'Essential Endurance' line. This means that, unlike the D5-P5316 with its 16K IU, the D5-P5430 uses a 4KB IU. The company had provided an inkling of this drive in their Tech Field Day presentation last year. Despite being a QLC SSD, Solidigm is promising very competitive read performance and higher endurance ratings compared to previous-generation TLC drives from its competitors. In fact, Solidigm believes that the D5-P5430 can be quite competitive against TLC drives like the Micron 7450 Pro and Kioxia CD6-R.

Solidigm D5-P5430 NVMe SSD Specifications

| Aspect | Solidigm D5-P5430 |
|--------|-------------------|
| Form Factor | 2.5" 15mm U.2 / E3.S / E1.S |
| Interface, Protocol | PCIe 4.0 x4, NVMe 1.4c |
| Capacities | 3.84 TB, 7.68 TB, 15.36 TB (E1.S / U.2 / E3.S); 30.72 TB (U.2 / E3.S) |
| 3D NAND Flash | Solidigm 192L 3D QLC |
| Sequential Performance | 128KB Reads @ QD 256: 7.0 GB/s; 128KB Writes @ QD 256: 3.0 GB/s |
| Random Access | 4KB Reads @ QD 256: 971K IOPS; 4KB Writes @ QD 256: 120K IOPS |
| Latency (Typical) | 4KB Reads @ QD 1: 108 µs; 4KB Writes @ QD 1: 13 µs |
| Power Draw | 128KB Seq. Read: ?? W; 128KB Seq. Write: 25.0 W; 4KB Rand. Read: ?? W; 4KB Rand. Write: ?? W; Idle: 5.0 W |
| Endurance (DWPD) | 100% 128KB Sequential Writes: 1.83; 100% 4KB Random Writes: 0.58 |
| Warranty | 5 years |

Based on market positioning, the Micron 6500 ION launched earlier today is the main competition for the D5-P5430. The sequential write and power consumption numbers are not particularly attractive for the Solidigm drive on a comparative basis, but the D5-P5430 does win out on the endurance aspect - 0.3 RDWPD for the 6500 ION against 0.58 RDWPD for the D5-P5430 (surprising for a QLC drive). Solidigm prefers a total NAND writes limit as a better estimate of endurance, and quotes 32 PBW as the endurance rating for the D5-P5430's maximum capacity SKU. Another key aspect here is that the D5-P5430 is only available in capacities up to 15.36 TB today; the 30 TB SKU is slated to appear later this year. In comparison, the 30 TB SKU for the 6500 ION is available now. On the other hand, the D5-P5430 is available in a range of capacities and form-factors, unlike the 6500 ION. The choice might just end up being dependent on how each SSD performs for the intended use-cases.
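As a quick sanity check, the 32 PBW quote lines up with the 0.58 DWPD random-write rating, assuming DWPD is defined over the drive's 5-year warranty period:

```python
# Cross-checking Solidigm's 32 PBW endurance quote against the 0.58 DWPD
# (4KB random write) rating for the 30.72 TB SKU, assuming DWPD is defined
# over the 5-year warranty period.
capacity_tb = 30.72
dwpd = 0.58
warranty_days = 5 * 365

petabytes_written = capacity_tb * dwpd * warranty_days / 1000
print(f"Implied endurance: ~{petabytes_written:.1f} PBW")  # ~32.5 PBW, matching the quote
```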
- Micron Updates Data Center NVMe SSD Lineup: 6500 ION TLC and XTR SLC on 16 May 2023 at 13:00
Micron is expanding its data center SSD lineup today with the introduction of two new products - the 6500 ION and the XTR NVMe SSDs. These two products do not fall into any of their existing enterprise SSD lineups. They are meant to fill holes in their product stack for high-capacity and high-endurance offerings. While the Micron 6500 ION is a TLC drive with QLC pricing, the XTR NVMe SSD is an SLC offering. Read on for a closer look at the specifications and market positioning of the two products.
- Asus Formally Unveils ROG Ally Portable Console: Eight Zen 4 Cores and RDNA 3 GPU in Your Hands on 12 May 2023 at 18:30
Asus on Thursday officially introduced the ROG Ally, its first handheld gaming PC. With numerous handheld gaming systems around, most notably the Steam Deck, Asus needed something special to be successful and fulfill the promise of the ROG brand. To that end, the ROG Ally promises a unique combination of performance enabled by AMD's latest mobile CPU, high compatibility due to its use of Windows 11, portability, and other features. Performance: To Extreme, or Not to Extreme? First teased by Asus last month, the ROG Ally is the company's effort to break into the handheld gaming PC space, which Valve has essentially broken open in the past year with the Steam Deck. When developing the ROG Ally, Asus wanted to build a no-compromise machine that would combine the performance of mobile PCs with the portability that comes with a handheld device. This is where AMD's recently-launched Zen 4-based Ryzen Z1 and Ryzen Z1 Extreme SoCs, which are aimed specifically at ultra-portable devices, come into play. Based on AMD's 4nm Phoenix silicon, the eight-core Ryzen Z1 Extreme processor and its 12 CU RDNA 3-based GPU resemble the company's Ryzen 7 7840U. Meanwhile, Asus is also offering a version of the Ally using the lower-tier Z1 chip, which still uses eight CPU cores but pairs them with a 4 CU GPU. On paper, the Z1 Extreme chip is significantly more powerful in graphics tasks as a result (~3x); in practice, however, the chips are closer, as thermal and memory bandwidth limits keep the Extreme chip from running too far ahead. Speaking of graphics performance, it should be noted that Asus's ROG Ally console is equipped with the ROG XG Mobile connector (a PCIe 3.0 x8 link for data and a USB-C port for power and USB connections) that can be used to connect an Asus ROG XG Mobile eGFX dock to the handheld. The XG docks come with a range of GPUs installed, up to a GeForce RTX 4090 Laptop GPU. The XG dock essentially transforms the ROG Ally into a high-performance gaming system, albeit by supplanting much of its on-board functionality. The fact that Asus offers eGFX capability right out of the box is a significant feature differentiator for the ROG Ally, though be prepared to invest $1,999.99 if you want the top-end GeForce RTX 4090 Laptop-equipped XG dock. Both versions of the ROG Ally will come with 16GB of LPDDR5-6400 memory and a 512GB SSD in an M.2 2230 form-factor with a PCIe 4.0 interface. While replacing the M.2 drive is reportedly a relatively easy task, for those who want to expand storage space without opening anything up, the console also has a UHS-II-compliant microSD card slot. Display: Full-HD at 120 Hz The ROG Ally is not only the first handheld with the Ryzen Z1 Extreme CPU, but it will also be among the first portable game consoles with a 1920x1080 resolution 7-inch display; and one that supports a maximum refresh rate of 120 Hz, no less. The Gorilla Glass Victus-covered display uses an IPS-class panel with a peak luminance of 500 nits as well as Dolby Vision HDR support to make games more appealing. In addition to the Dolby Vision HDR-badged display, the Asus ROG Ally also has a Dolby Atmos-certified audio subsystem with Smart Amp speakers and noise cancellation technology. Ergonomics: 600 Grams and All the Controls When it comes to mobile devices, ergonomics is crucial. Yet it is pretty hard to design a handheld game console that essentially uses laptop-class silicon, with all of its peculiarities.
When Asus began work on the ROG Ally, it asked mobile gamers what they thought was the most important feature for a portable console, and apparently it was weight. So Asus set about designing a device that would weigh around 600 grams and be comfortable to use. "When we go through survey with our focus group, the number one thing that they wanted was a balanced weight handheld device," said Shawn Yen, vice president of Asus's Gaming Business Unit responsible for ROG products. "The target was 600 grams because the current handheld devices in the market today are too heavy. It is not something that they can engage for a very long period of time. So, their game time got cut down because it is not comfortable. So, when we first thought about the design target for ROG Ally, we were thinking about a device that can get into gamers' hands for hours of fun time." The display and chassis are among the heaviest components of virtually all mobile devices, so there is little that can be done about those. But in a bid to optimize the weight and distribute it across the device, the company had to implement a very well-thought-out motherboard design and use anti-gravity heat pipes to ensure proper cooling at all times, without using too many of them, as this would increase weight. Meanwhile, Asus still had to use two fans and a radiator with 0.1 mm ultra-thin fins to ensure that the CPU is cooled properly, as it can dissipate up to 30W of heat. To further optimize weight, Asus opted for a polycarbonate chassis. Since the Asus ROG Ally is essentially a Windows 11-based PC, albeit in a portable game console form factor, the company had to incorporate all the pads and buttons featured on conventional gamepads, plus some more controls for Windows (e.g., a touchscreen) and ROG Ally-specific things like the Armoury Crate game launcher and two macro buttons. It's also worth noting that, seemingly because of the use of Windows 11, the Ally is not capable of consistently suspending games while it sleeps, a notable difference compared to other handheld consoles. Meanwhile, the trade-off to hitting their weight target while still using a relatively powerful SoC has been battery life. The Ally comes with a 40Wh battery, and Asus officially advertises the handheld as offering up to 2 hours of battery life in heavy gaming workloads. Early reviews, in turn, have matched this, if not coming in below 2 hours in some cases. The higher-resolution display and high-performance AMD CPU are both key differentiating factors for the Ally, but these parts come at a high power cost. Vast Connectivity Being a PC, the ROG Ally is poised to offer the connectivity one comes to expect from a portable computer. The unit features a Wi-Fi 6E and Bluetooth adapter, a MicroSD card slot for additional storage, a USB Type-C port for both charging and display output, an ROG XG Mobile connector for external GPUs, and a TRRS audio connector for headsets. The Price The ROG Ally with AMD's Ryzen Z1 Extreme CPU is set to launch globally on June 13, 2023, at a price point of $699.99. Meanwhile, the non-Extreme Z1 version of the Ally has been listed at $599.99, though no release date has been set. The first reviews are already out, so Asus is giving potential customers a long lead time to evaluate the console before it's released next month.
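For a sense of what the battery life claim above implies about power draw, here is a rough estimate, assuming the full 40 Wh capacity is usable (real-world figures will vary with screen brightness and the selected power profile):

```python
# Average system power implied by the claimed battery life, assuming the
# full 40 Wh is usable; an estimate, not an Asus specification.
battery_wh = 40.0
claimed_hours = 2.0  # "up to 2 hours" of heavy gaming

avg_system_draw_w = battery_wh / claimed_hours
print(f"Implied average system draw: ~{avg_system_draw_w:.0f} W")  # ~20 W
# This suggests the 30 W SoC ceiling is a burst / plugged-in figure rather
# than something sustainable on battery for the full two hours.
```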
- Asus Unveils Two Slimmer GeForce RTX 4090 Video Cards: ROG Strix LC and TUF OG on 12 May 2023 at 17:00
Asus has expanded the company's GeForce RTX 40-series product portfolio with two new RTX 4090 graphics cards. The ROG Strix LC GeForce RTX 4090 and TUF Gaming GeForce RTX 4090 OG, both available in regular and OC editions, have arrived to compete in the high-end segment. What makes these cards notable, in turn, is their reduced size: the new cards are physically smaller than Asus's early RTX 4090 offerings, as well as many of the competing cards on the market. The GeForce RTX 4090 is a 450W gaming graphics card, with large coolers to match. Even NVIDIA's hard-to-get GeForce RTX 4090 Founders Edition is a triple-slot graphics card, and air-cooled AIB cards tend to be larger still. So for the size-conscious gamer, this leaves liquid-cooled cards, which brings us to Asus's new ROG Strix LC GeForce RTX 4090. The closed-loop card moves a lot of its bulk off to an attached 240 mm radiator block, bringing the card itself down to 2.6 slots wide. The ROG Strix LC GeForce RTX 4090's hybrid cooling system packs a cold plate that cools the large AD102 GPU and the neighboring GDDR6X memory chips. The heat is transferred to the 240 mm radiator through 560 mm tubing, so there won't be an issue with large cases. A low-profile heatsink with a blower-style cooling fan keeps the other power delivery components cool, while the radiator itself is equipped with a pair of 120 mm ARGB cooling fans to dissipate the heat once it gets there.

Asus GeForce RTX 4090 Specifications

| AnandTech | ROG Strix LC GeForce RTX 4090 | TUF Gaming GeForce RTX 4090 OG | TUF Gaming GeForce RTX 4090 |
|---|---|---|---|
| Boost Clock, Regular Edition (Default / OC Mode) | 2,520 MHz / 2,550 MHz | 2,520 MHz / 2,550 MHz | 2,520 MHz / 2,550 MHz |
| Boost Clock, OC Edition (Default / OC Mode) | 2,610 MHz / 2,640 MHz | 2,565 MHz / 2,595 MHz | 2,565 MHz / 2,595 MHz |
| Display Outputs | 2 × HDMI 2.1a, 3 × DisplayPort 1.4a | 2 × HDMI 2.1a, 3 × DisplayPort 1.4a | 2 × HDMI 2.1a, 3 × DisplayPort 1.4a |
| Design | 2.6 slot | 3.2 slot | 3.65 slot |
| Power Connectors | 1 × 16-pin | 1 × 16-pin | 1 × 16-pin |
| Dimensions | 293 × 133 × 52 mm | 325.9 × 140.2 × 62.8 mm | 348.2 × 150 × 72.6 mm |
| Radiator Dimensions | 272 × 121 × 54 mm | N/A | N/A |

Asus's other new RTX 4090 card, the air-cooled TUF Gaming GeForce RTX 4090 OG, is a unique case of its own. Technically it's a new SKU; however, the graphics card reuses the TUF Gaming cooler from the TUF Gaming GeForce RTX 3090 Ti. This is notable because the TUF cooler used on the 3090 Ti was a good bit smaller than Asus's first RTX 4090 cooler. The net result is that these changes bring the new OG card's width from 3.65 slots (arguably, wide enough that you need to leave a 4th slot open for airflow) down to 3.2 slots - just enough room for proper airflow even if the neighboring 4th slot is occupied. Altogether, the OG model is smaller in every dimension, shaving off roughly 6-7% of its height and length, and 13% of its width. Asus doesn't list the weight of its graphics cards, so we cannot comment on whether the new OG version has lost weight. By most accounts, Asus's current RTX 4090 cooler is highly effective – it's just also really big. So offering a separate SKU with a smaller cooler makes a good deal of sense, especially given how popular NVIDIA's true triple-slot Founders Edition card has been. The smaller TUF cooler is rated for the same 450W TDP as the larger TUF 4090 cooler, but, as always, there may be performance/acoustic tradeoffs involved. There's one other change that Asus doesn't advertise with the TUF Gaming GeForce RTX 4090 OG.
The renders on the product page show the graphics card with a longer PCB. One of the advantages of the more compact PCB on the previous model was that it permitted Asus (and NVIDIA) to vent heat out of the back side of the card, as well as to optimize the trace layouts and component placement. Meanwhile, with the longer PCB, Asus has relocated the 16-pin power connector: instead of being placed in the middle, the power connector sits further toward the right side. Between the two new cards, the ROG Strix LC GeForce RTX 4090 ends up with the edge in clockspeeds, flaunting boost clocks up to 2,640 MHz in its highest performance mode. Meanwhile, the TUF Gaming GeForce RTX 4090 OG series has the same clock speeds as the vanilla models, with a rated boost clock of 2,520 MHz stock and 2,595 MHz when the OC card is in its highest mode. In addition, the ROG Strix LC GeForce RTX 4090 and TUF Gaming GeForce RTX 4090 OG have other attributes in common, including a single 16-pin power connector and a display output layout consisting of two HDMI 2.1a ports and three DisplayPort 1.4a outputs. Asus hasn't revealed the pricing or availability of the new graphics cards. For reference, the TUF Gaming GeForce RTX 4090 and OC Edition retail for $1,599 and $1,799, respectively. The OG counterparts will likely have similar price tags. Meanwhile, we'd expect the ROG Strix LC GeForce RTX 4090 to carry a more considerable premium due to its AIO liquid cooling design.
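The size deltas quoted above can be verified directly from the dimensions in the specifications table; a quick sketch (small rounding differences versus the prose percentages are expected):

```python
# Recomputing the TUF RTX 4090 OG's size reduction versus the original TUF
# RTX 4090 from the table dimensions (mm).
og_card = {"length": 325.9, "height": 140.2, "width": 62.8}
original = {"length": 348.2, "height": 150.0, "width": 72.6}

for dim, og_value in og_card.items():
    reduction = 1 - og_value / original[dim]
    print(f"{dim}: ~{reduction:.0%} smaller")  # length ~6%, height ~7%, width ~13%
```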
- Philips Reveals Dual Screen Display: a 24-Inch LCD with E Ink Secondary Screen on 11 May 2023 at 18:00
Although E Ink technology has remained a largely niche display tech over the past decade, it has nonetheless excelled in that role. The electrophoretic technology closely approximates paper, providing significant power advantages versus traditional emissive displays, not to mention being significantly easier on readers' eyes in some cases. And while the limitations of the technology make it unsuitable for use as a primary desktop display, Philips thinks there's still a market for it as a secondary display. To that end, Philips this week has introduced a novel, business-oriented Dual Screen Display, which combines an LCD panel and an E Ink panel in a single display, with the aim of capturing the benefits of both technologies. The Philips Dual Screen Display (24B1D5600/96) is a single display that integrates both a 23.8-inch 2560x1440 IPS panel and a 13.3-inch greyscale 1200x1600 E Ink display. With each panel operating independently, the idea is similar to previous concepts of multi-panel monitors; however, Philips is taking things in a different direction by using an E Ink display as the second panel – combining two otherwise very different display technologies into a single product. By offering an E Ink panel in this product, Philips is looking to court the market of users who would prefer the reduced eye strain of an E Ink display but are working at a desktop computer, where an E Ink display would not be viable as a primary monitor. As you might expect from the basic layout of the monitor, the primary panel is a rather typical office display that's designed for video and productivity applications – essentially anything where you need a modern, full-color LCD. The secondary E Ink display, on the other hand, is a greyscale screen whose strength is the lack of flicker that comes from not being backlit by a PWM light. Both screens act independently, but since they are encased in the same chassis, they are meant to work together. For example, the secondary monitor can display supplementary information in text form, while the primary monitor displays photos. Ultimately, Philips is pitching the display on the idea that the secondary screen can reduce the eye strain of the viewer while viewing documents. It's a simple enough concept, but one that requires buyers to overlook the trade-offs of E Ink, and the potential drawbacks of having two dissimilar displays directly next to each other. Under the hood, the LCD panel of the Dual Screen Display is an unremarkable office-grade display. Philips is using a 23.8-inch anti-glare 6-bit + Hi-FRC IPS panel with a 2560x1440 resolution, which can hit a maximum brightness of 250 nits while delivering 178-degree viewing angles. Meanwhile, the E Ink panel is a 13.3-inch 4-bit greyscale electrophoretic panel with a resolution of 1200x1600. Notably, there is no backlighting here; the E Ink panel is meant to be environmentally lit (e.g. office lighting) to truly minimize eye strain. When it comes to connectivity, the primary screen is equipped with a DisplayPort 1.2 and a USB Type-C input (with DP Alt Mode and USB Power Delivery support), a USB hub, and a GbE adapter. Meanwhile, the secondary screen connects to the host using a USB Type-C connector that also supports DP Alt Mode and Power Delivery.

Specifications of the Philips Dual Screen Display 24B1D5600/96

| | Primary Screen | Secondary Screen |
|---|---|---|
| Panel | 23.8" IPS, 6-bit + Hi-FRC | 13.3" E Ink, 4-bit greyscale |
| Native Resolution | 2560 × 1440 | 1200 × 1600 |
| Maximum Refresh Rate | 75 Hz | ? |
| Response Time | 4 ms | ? |
| Brightness | 250 cd/m² (typical) | ? |
| Contrast | 1000:1 | ? |
| Viewing Angles | 178°/178° horizontal/vertical | high |
| HDR | none | none |
| Dynamic Refresh Rate | none | none |
| Pixel Pitch | 0.2058 mm | 0.2058 mm |
| Pixel Density | 123 ppi | 150 ppi |
| Display Colors | 16.7 million | greyscale |
| Color Gamut Support | NTSC: 99%, sRGB: 99% | 4-bit greyscale |
| Aspect Ratio | 16:9 | 3:4 |
| Stand | Height: +/-100 mm; Tilt: -5°/23°; Swivel: 45° | - |
| Inputs | 1 × DisplayPort (HDCP 1.4), 1 × USB-C (HDCP 1.2 + PD) | 1 × USB-C (HDCP 1.4 + PD) |
| Outputs | - | - |
| USB Hub | USB 3.0 hub | - |
| Launch Date | Q2 2023 | Q2 2023 |

The Philips Dual Screen Display has a rather sleek stand which can adjust for height, tilt, and swivel. It makes the whole unit look like one monitor rather than two separate screens. Though to be sure, the E Ink portion of the display can be angled independently from the LCD panel, allowing the fairly wide monitor to contour to a user's field of view a bit better. When it comes to pricing, Philips's Dual Screen Display is available in China for $850 (according to Liliputing), which looks quite expensive for a 24-inch IPS LCD and a 13.3-inch secondary screen. Though as this is a rather unique product, it is not surprising that it is sold at a premium.
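The quoted pixel densities follow directly from the resolutions and diagonal sizes in the table; a quick check:

```python
import math

# Recomputing the quoted pixel densities from the resolutions and diagonal
# sizes in the spec table above.
def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal inches."""
    return math.hypot(width_px, height_px) / diagonal_in

print(f"Primary 23.8-inch LCD: ~{ppi(2560, 1440, 23.8):.0f} ppi")    # ~123 ppi
print(f"Secondary 13.3-inch E Ink: ~{ppi(1200, 1600, 13.3):.0f} ppi")  # ~150 ppi
```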
- Samsung to Unveil Refined 3nm and Performance-Enhanced 4nm Nodes at VLSI Symposium on 10 May 2023 at 20:00
Samsung Foundry is set to detail its second-generation 3nm-class fabrication technology, as well as its performance-enhanced 4nm-class manufacturing process, at the upcoming 2023 Symposium on VLSI Technology and Circuits in Kyoto, Japan. Both technologies are important for the contract chipmaker: SF3 (3GAP) promises to offer tangible improvements for mobile SoCs, whereas SF4X (4HPC) is designed specifically for the most demanding high-performance computing (HPC) applications. 2nd Generation 3nm Node with GAA Transistors Samsung's upcoming SF3 (3GAP) process technology is an enhanced version of the company's SF3E (3GAE) fabrication process, and relies on its second-generation gate-all-around transistors – which the company calls Multi-Bridge-Channel field-effect transistors (MBCFETs). The node promises additional process optimizations, though the foundry prefers not to compare SF3 with SF3E. Compared to SF4 (4LPP, 4nm-class, low power plus), SF3 claims a 22% performance boost at the same power and complexity, or a 34% power reduction at the same clocks and transistor count, as well as a 21% logic area reduction. It is unclear, though, whether the company has achieved any scaling for SRAM and analog circuits. In addition, Samsung claims that SF3 will provide additional design flexibility facilitated by varying nanosheet (NS) channel widths of the MBCFET device within the same cell type. Curiously, variable channel width is a feature of GAA transistors that has been discussed for years, so the way Samsung is phrasing it in the context of SF3 might mean that SF3E does not support it. Samsung's Earliest 4nm Node: SF4E (IEDM 2021) Thus far, neither Samsung LSI, the conglomerate's chip development arm, nor other customers of Samsung Foundry have formally introduced a single highly-complex processor mass produced on the SF3E/3GAE process technology. In fact, it looks like the only publicly-acknowledged application that uses the industry's first 3nm-class fabrication process is a cryptocurrency mining chip, according to TrendForce. This is not particularly surprising, as usage of Samsung's 'early' nodes is typically quite limited. By contrast, Samsung's 'plus' technologies are typically used by a wide range of customers, so the company's SF3 (3GAP) process is likely to see much higher volumes when it becomes available sometime in 2024. SF4X for Ultra-High-Performance Applications In addition to SF3, which is designed for a variety of possible use cases, Samsung Foundry is prepping its SF4X (4HPC, 4nm-class high-performance computing), designed for performance-demanding applications like datacenter-oriented CPUs and GPUs. To address such chips, Samsung's SF4X offers a performance boost of 10% coupled with a 23% power reduction. Samsung doesn't explicitly specify what process node that comparison is being made against, but presumably this is against their default SF4 (4LPP) fabrication technology. To achieve this, Samsung redesigned transistors' source and drain after reassessing their stresses (presumably under high loads), performed further transistor-level design-technology co-optimization (T-DTCO), and introduced a new middle-of-line (MOL) scheme. The new MOL enabled SF4X to offer a silicon-proven CPU minimum voltage (Vmin) reduction of 60mV, a 10% decrease in the variation of off-state current (IDDQ), guaranteed high-voltage (Vdd) operation at over 1V without performance degradation, and an improved SRAM process margin.
Samsung's SF4X will be a rival for TSMC's N4P and N4X nodes, which are due in 2024 and 2025, respectively. Based on claimed specifications alone, it is hard to tell which technology will offer the best combination of performance, power, transistor density, efficiency, and cost. That said, SF4X will be Samsung's first node in recent years that was specifically architected with HPC in mind, which implies that Samsung has (or is expecting) enough customer demand to make it worth their time.
- NVIDIA Launches Diablo IV Bundle for GeForce RTX 40 Video Cards on 9 May 2023 at 21:00
NVIDIA is launching a new game bundle for its latest-generation GeForce RTX 40-series graphics cards and OEM systems. This time, NVIDIA has teamed up with Activision Blizzard to offer a free copy of the latest iteration of their wildly popular action RPG series, Diablo IV. This promotion will run globally, starting now and running through June 16, 2023. For more than a month, customers purchasing GeForce RTX 4090, 4080, 4070 Ti, and 4070 graphics cards, or desktops containing one of them, from various vendors will get a free digital download code for Diablo IV Standard Edition on Battle.net. The code for the title must be redeemed before July 13, 2023.

NVIDIA Current Game Bundles (May 2023)

| Video Card (incl. systems and OEMs) | Game |
|---|---|
| GeForce RTX 40 Series Desktop (All) | Diablo IV |
| GeForce RTX 30 Series Desktop (All) | None |
| GeForce RTX 40 Series Laptop (All) | None |
| GeForce RTX 30 Series Laptop (All) | None |

For NVIDIA, Diablo IV will also be a technology showcase, as it is set to support the DLSS 3 upscaling technology as well as the latency-cutting Reflex technology out of the box at launch. Ray tracing is also slated to be added at some point after the game launches. At retail pricing, Activision Blizzard's Diablo IV Standard Edition costs $69.99 on Battle.net, though NVIDIA is undoubtedly getting a bulk deal. It should be noted that this latest game bundle is just for NVIDIA's RTX 40 series desktop cards. Unlike the since-expired Redfall bundle, NVIDIA is not offering Diablo IV (or any other games) with GeForce-based laptops. Nor are any remaining GeForce RTX 30 series products covered. Diablo IV will officially release on June 6, 2023. Source: NVIDIA
- AMD To Host AI and Data Center Event on June 13th - MI300 Details Inbound? on 9 May 2023 at 16:45
In a brief note posted to its investor relations portal this morning, AMD has announced that it will be holding a special AI and data center-centric event on June 13th. Dubbed the “AMD Data Center and AI Technology Premiere”, the live event is slated to be hosted by CEO Dr. Lisa Su, and will focus on AMD’s AI and data center product portfolios – with a particular spotlight on AMD’s expanded product portfolio and plans for growing out these market segments. The very brief announcement doesn’t offer any further details on what content to expect. However, the very nature of the event points a clear arrow at AMD’s forthcoming Instinct MI300 accelerator. MI300 is AMD’s first shot at building a true data center/HPC-class APU, combining the best of AMD’s CPU and GPU technologies. AMD has offered only a handful of technical details about MI300 thus far – we know it’s a disaggregated design, using multiple chiplets built on TSMC’s 5nm process and using 3D die stacking to place them over a base die – and with MI300 slated to ship this year, AMD will need to fill in the blanks as the product gets closer to launch. As we noted in last week’s AMD earnings report, AMD’s major investors have been waiting with bated breath for additional details on the accelerator. Simply put, investors are treating data center AI accelerators as the next major growth opportunity for high-performance silicon – eyeing the high margins these products have afforded NVIDIA and other AI-adjacent rivals – so there is a lot of pressure on AMD to claim a slice of what’s expected to be a highly profitable pie. MI300 is a product that has been in the works for years, so the pressure is more a reaction to the money than to the silicon itself; but still, MI300 is expected to be AMD’s best opportunity yet to capture a meaningful portion of the data center GPU market. MI300 aside, given the dual AI and data center focus of the event, this is also where we’re likely to see more details on AMD’s forthcoming EPYC “Genoa-X” CPUs. The 3D V-Cache-equipped version of AMD’s current-generation EPYC 9004 series Genoa CPUs, Genoa-X has been on AMD’s roadmap for a while. And with their consumer equivalent parts already shipping (Ryzen 7000X3D), AMD should be nearing completion of the EPYC parts. AMD has previously confirmed that Genoa-X will ship with up to 96 CPU cores, with over 1GB of total L3 cache available on the chip to further boost performance on workloads that benefit from the extra cache. AMD’s ultra-dense EPYC Bergamo chip is also in the pipeline, though given the high-performance aspects of the presentation, it’s a little more questionable whether it will be at the show. Based on AMD’s compacted Zen 4c architecture, Bergamo is aimed at cloud service providers who need lots of cores to split up amongst customers, with up to 128 CPU cores on a single Bergamo chip. Like Genoa-X, Bergamo is slated to launch this year, so further details about it should come to light sooner rather than later. But whatever AMD does (or doesn’t) show at the event, we’ll find out on June 13th at 10am PT (17:00 UTC). AMD will be live streaming the event from its website as well as YouTube.