- [PDS ATTENTION/PSA] NZXT H1 Case Safety Issue/Recall - "We have identified that the two screws that attach the PCIe Riser assembly to the chassis may cause an electrical short circuit in the [PCB]"; potential sparking/fire hazard
- Open-source oriented tests of M1 Mac Mini. Includes arm64 Ubuntu in a VM.
- BLENDER Running on the NEW M1 Macs (CPU & RAM Usage, Thermals, Rendering...
- Scalable memory clocks?
- Console texture decompression and PC technologies
- Interestingly, a four-layer ATX-sized motherboard via JLCPCB isn't too expensive anymore
- ASUS RTX 3090 STRIX OC VS. MSI RTX 3090 SUPRIM X
- Level1Techs - Liqid PCIe Transports: Liqid Composable Infrastructure From Small Scale to the Data Center - Sumit Puri Interview @ Liqid
- Deepcool AS500 Review
- True “Continuity” across Apple ARM devices?
- Mobile Tiger Lake CPU holds up well against Apple's M1
[PDS ATTENTION/PSA] NZXT H1 Case Safety Issue/Recall Posted: 28 Nov 2020 06:43 PM PST
Open-source oriented tests of M1 Mac Mini. Includes arm64 Ubuntu in a VM. Posted: 28 Nov 2020 06:16 AM PST
BLENDER Running on the NEW M1 Macs (CPU & RAM Usage, Thermals, Rendering... Posted: 28 Nov 2020 11:19 AM PST
Scalable memory clocks? Posted: 28 Nov 2020 05:51 PM PST A thought just crossed my mind: CPU clocks vary with load, increasing frequency under high load and dropping under low load to reduce power consumption in turn. GPUs do the same. Both have predetermined limits they're designed to operate within so they don't become unstable or degrade. The whole point is to have as much performance available as possible when it's needed (high load) and to be as energy- and temperature-efficient as possible when the system is idling (low load). So why don't we see memory clocks scaling in real time? What practical limitations can you think of? Surely, if XMP is stable, or an end user can find a stable maximum speed for the kit at particular timings, then reducing the clock speed while leaving the timings alone under low load should cut energy consumption and heat output at times when performance isn't a priority... Or am I missing something? [link] [comments]
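One practical catch the question skips over: DRAM timings are specified in clock cycles, so holding CL constant while halving the clock doubles the absolute latency in nanoseconds. A back-of-the-envelope sketch, using a hypothetical DDR4-3600 CL16 kit as the example:

```python
# Sketch: why downclocking DRAM at fixed (cycle-count) timings isn't free.
# Absolute CAS latency in ns = CL cycles / data rate (MT/s) * 2000
# (the factor 2000 converts MT/s to ns for a double-data-rate bus).
def cas_latency_ns(cl_cycles: int, data_rate_mts: int) -> float:
    """Absolute CAS latency in nanoseconds for a DDR module."""
    return cl_cycles / data_rate_mts * 2000

fast = cas_latency_ns(16, 3600)  # DDR4-3600 CL16: about 8.9 ns
slow = cas_latency_ns(16, 1800)  # same CL16 timings at half clock: about 17.8 ns
print(f"{fast:.2f} ns vs {slow:.2f} ns")
```

So a naive downclock with frozen timings doubles first-word latency. There is also a mechanical obstacle: changing DRAM frequency generally requires the memory controller to retrain the interface, which is commonly cited as one reason desktop DDR doesn't switch speeds on the fly, while mobile LPDDR standards do include frequency-scaling support.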
Console texture decompression and PC technologies Posted: 29 Nov 2020 02:26 AM PST Hi. As far as I understand, game consoles use hardware blocks that accelerate the compression and decompression of game textures so they can be loaded into memory quickly. Several articles say the dedicated PS5 chip is comparable to nine CPU cores for these tasks, and that the Xbox's chip frees up five CPU cores. I'm pretty weak in computer theory. On a PC, what is used for compression and decompression, CPU resources, right? What algorithms are used? Are RTX IO and DirectX 12 DirectStorage inherently similar to the compression and decompression technologies in the game consoles? [link] [comments]
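To make the "a PC does it on the CPU" point concrete, here is a minimal sketch using Python's zlib as a stand-in codec. Real games use faster formats (LZ4, Oodle Kraken on PS5, BCPack on Xbox Series consoles); the `decompress` call below is exactly the work that the console hardware blocks, and on PC RTX IO / DirectStorage, aim to take off the CPU cores:

```python
# Stand-in for texture decompression on a PC today: the CPU does it.
import zlib

# Fake "texture": 4 MiB of semi-repetitive bytes, so it compresses well.
raw = bytes(range(256)) * (4 * 1024 * 1024 // 256)
packed = zlib.compress(raw, level=6)   # done offline at build time in a real game

unpacked = zlib.decompress(packed)     # this is the CPU work at load time
assert unpacked == raw
print(f"{len(packed)} compressed bytes -> {len(unpacked)} bytes on the CPU")
```

Scale that call up to streaming gigabytes per second from an NVMe drive and it saturates several cores, which is the bottleneck the dedicated decompression hardware (and GPU-side decompression on PC) is built to remove.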
Interestingly, a four-layer ATX-sized motherboard via JLCPCB isn't too expensive anymore Posted: 29 Nov 2020 02:18 AM PST https://twitter.com/yuhong2/status/1332991126580514816/photo/1 I used a quantity of 10 at 300 mm x 250 mm, and to summarize: "Engineering fee: $32.00, Large Size: $5.80, Board: $77.80" [link] [comments]
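For anyone checking the math, a trivial cost breakdown using the figures quoted in the post:

```python
# Per-board cost of the quoted JLCPCB order (10 boards, 300 mm x 250 mm, 4 layers).
engineering_fee = 32.00   # one-time setup charge
large_size_fee = 5.80     # surcharge for the oversized panel
board_cost = 77.80        # the boards themselves
qty = 10

total = engineering_fee + large_size_fee + board_cost
print(f"total: ${total:.2f}, per board: ${total / qty:.2f}")
# i.e. about $11.56 per four-layer ATX-sized bare board
```

Note the one-time engineering fee is amortized over the run, so larger quantities push the per-board price down further.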
ASUS RTX 3090 STRIX OC VS. MSI RTX 3090 SUPRIM X Posted: 29 Nov 2020 02:14 AM PST Hey guys. I'm building a new PC, but I'm really unsure which GPU to buy for my build. The choice is between the ASUS RTX 3090 STRIX OC and the MSI RTX 3090 SUPRIM X. I like the MSI's design more, but if the ASUS performs better, then I'm really not sure which one to buy. Do they perform the same? I've heard people say the ASUS performs 10% better than the FE; is it the same with the MSI version? [link] [comments]
Level1Techs - Liqid PCIe Transports: Liqid Composable Infrastructure From Small Scale to the Data Center - Sumit Puri Interview @ Liqid Posted: 29 Nov 2020 01:39 AM PST
Deepcool AS500 Review Posted: 28 Nov 2020 01:24 PM PST
True “Continuity” across Apple ARM devices? Posted: 28 Nov 2020 10:53 PM PST First post, and I'm looking for a technical engineering answer, so if I should post this somewhere else please point me to a subreddit. I've been looking forward to the launch of the M1 Macs because I thought the ability to run iOS apps on the Mac would allow for true workflow continuity. For example, right now if I open a webpage in Safari on my iPhone, I get an icon on my Mac that lets me open the same webpage, but that's no different from copying the link and pasting it on my Mac. True continuity would be if the webpage on my Mac were literally the same browser session, migrated from my phone to my Mac. This probably isn't a good example since it involves other servers and cookies and such, but the same principle of uninterrupted flow should work between apps that can run on both devices. My understanding of computer architecture makes me think this would be really hard, but maybe possible. If Apple can manage to implement this, I will be forever sold on the Apple ecosystem. Can anyone give me a technical answer as to whether this could work, and why or why not? [link] [comments]
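A hypothetical sketch of what "migrating a session" minimally involves: serialize the app's state on one device, transfer it, and rehydrate it on the other. The names below (`AppState`, `handoff`, `resume`) are invented for illustration and are not an Apple API. Apple's real Handoff passes a small NSUserActivity state dictionary, roughly like this sketch, rather than a live process image; migrating an arbitrary running app is much harder because heap contents, open sockets, and kernel state don't serialize.

```python
# Illustrative only: a toy "session handoff" by explicit state serialization.
import json
from dataclasses import dataclass, asdict

@dataclass
class AppState:
    """The small, explicit state an app chooses to hand off (hypothetical)."""
    url: str
    scroll_offset: int
    form_fields: dict

def handoff(state: AppState) -> str:
    """Serialize the state for transfer (stand-in for the radio link)."""
    return json.dumps(asdict(state))

def resume(payload: str) -> AppState:
    """Rehydrate the session on the receiving device."""
    return AppState(**json.loads(payload))

sent = AppState("https://example.com", 1200, {"q": "m1 mac"})
received = resume(handoff(sent))
assert received == sent  # same declared state, but not the same live process
```

The design point: this works only because the app declares a compact, device-independent representation of its session. A "true" migration of the running process would have to capture everything the process touches, which is why even same-ISA devices don't do it transparently.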
Mobile Tiger Lake CPU holds up well against Apple's M1 Posted: 28 Nov 2020 08:20 AM PST
You are subscribed to email updates from /r/hardware: a technology subreddit for computer hardware news, reviews and discussion.