• Breaking News

    Wednesday, April 22, 2020

    Hardware support: Nvidia RTX Voice can be "hacked" to work on non-RTX GPUs

    Nvidia RTX Voice can be "hacked" to work on non-RTX GPUs

    Posted: 21 Apr 2020 07:13 PM PDT

    AMD Ryzen 3 3300X and Ryzen 3 3100: New Low Cost Quad-Core Zen 2 Processors From $99

    Posted: 21 Apr 2020 06:26 AM PDT

    [VideoCardz] Noctua: Intel LGA1200 heatsink mounting is identical to all LGA115x sockets

    Posted: 22 Apr 2020 01:00 AM PDT

    AMD Makes it Official: Ryzen 3 3300X and 3100X Launched, B550 Chipset Announced

    Posted: 21 Apr 2020 06:17 AM PDT

    Does the MacBook Air 2020 have an overheating problem?

    Posted: 21 Apr 2020 07:32 PM PDT

    Hardware Futurism

    Posted: 21 Apr 2020 03:15 AM PDT

    Watching the Hellblade II trailer for the zillionth time, I'm reminded of how far we've come. Dennard scaling or not, hardware is getting better at a ridiculous pace, and the future is packed with possibilities. This post is a list of fundamental changes we might see over the coming years; some are imminent and inevitable, others are unlikely long-term wishes twenty years out.

    I've purposefully excluded some things from this list, like graphene computers, but I've certainly missed others, so feel free to add. An entry on this list is not an endorsement.

    CPUs

    Desktop CPUs have been pretty sameish since Itanium died, with the cool stuff like Denver not panning out, but we're seeing some disruption from AMD and Arm.

    SMT4 (4 threads per core) goes mainstream

    You're all presumably familiar with '8 core, 16 thread' CPUs. SMT4 extends this to '8 core, 32 thread'. As cores get larger—and they will get larger—they have more spare throughput, so higher levels of SMT become more relevant. As thread counts increase, low-end CPUs might need higher SMT levels to reduce context switching overheads. There are unreliable rumours of AMD adding SMT4 to their server line a few generations out.

    Higher core counts

    Core counts have finally started rising again. Mobile has gone highly multicore, AMD has pushed it, and the 3990X has shattered the Overton window. Once programmers acclimatize, this growth will become self-fueling.

    GPU-CPU convergence

    People have been calling for CPUs and GPUs to merge almost since the GPU's inception. They might have been wrong about the timescale, but the general trend held true: GPUs have evolved into general-purpose SIMD processors, and CPUs have become manycores with fast SIMD. There are important advantages in specialization, but an eventual true hybrid is not yet out of the question.

    Apple cores for laptop and desktop

    Apple's phone and tablet CPUs are absurdly fast, running almost neck and neck with vastly higher-clocked desktop chips at a fraction of the power. Rumours (and common sense) foretell Apple bringing these Arm cores to the rest of their lineup, shaking up the industry forever.

    Other Arm chips get popular

    Other Arm chips may not hit Apple's performance figures, but they are starting to compete in higher core count workloads, and they are improving at a fast pace. The small size of Arm's cores is great for making cheap manycore chips that still perform well; see the Graviton2, ThunderX3, and Ampere Altra in servers, with the Snapdragon 8cx as an early entry into the laptop market.

    Architecture

    CPU architecture moves very hesitantly, but every now and again we see innovation.

    RISC-V

    RISC-V is an 'open standard' for a CPU instruction set and architecture that has gathered steam fairly quickly. It's unlikely to trade blows with x86 any time soon, but is making inroads in the embedded and coprocessor markets, and generally represents state-of-the-art among conservative ISA designs.

    Arm SVE

    SIMD, the ability to operate on small arrays of data in parallel, is a major part of a modern CPU's performance, and Arm's SVE is the first SIMD instruction set you're likely to use that isn't awful: it is vector-length agnostic, so one binary runs on any hardware vector width. Better SIMD means better compilers and happier programmers.
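
    To make the appeal concrete, here is a minimal sketch of vector-length-agnostic SVE code using the Arm C Language Extensions (my own illustration, not from any vendor material); the same binary runs unchanged on hardware with 128-bit through 2048-bit vectors.

        #include <arm_sve.h>

        /* Scale an array in place: a[i] *= s. Works for any n with no
           scalar tail loop, on any SVE vector width. */
        void scale(float *a, long n, float s) {
            for (long i = 0; i < n; i += svcntw()) {   /* svcntw() = floats per vector */
                svbool_t pg = svwhilelt_b32(i, n);     /* predicate masks the tail */
                svfloat32_t va = svld1_f32(pg, &a[i]); /* masked load */
                va = svmul_n_f32_x(pg, va, s);         /* masked multiply by scalar */
                svst1_f32(pg, &a[i], va);              /* masked store */
            }
        }

    Contrast this with SSE/AVX, where each vector width needs its own code path plus a scalar epilogue for the remainder.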

    Hardware capabilities

    Hardware capabilities are a security model that protects memory at a fine grain using unforgeable tokens stored directly in the program's pointers. It offers vastly better memory protection than traditional memory models, and has been explored by the CHERI project on both Arm and RISC-V. However, it requires program-level changes to function.
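
    Roughly, capability-aware C looks like the following sketch (based on the CHERI Clang toolchain's cheriintrin.h; treat the details as illustrative rather than authoritative):

        #include <cheriintrin.h>
        #include <stddef.h>

        /* Sketch: hand out a narrowed view of a buffer. Under CHERI,
           every pointer is a capability carrying hardware-checked
           bounds and permissions. */
        char *make_read_only_view(char *buf, size_t off, size_t len) {
            char *p = buf + off;
            p = cheri_bounds_set(p, len);            /* indexing outside [p, p+len) traps */
            p = cheri_perms_and(p, CHERI_PERM_LOAD); /* drop write permission */
            return p; /* the callee can read len bytes and nothing else */
        }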

    Tachyum Prodigy

    Tachyum announced their new CPUs at Hot Chips 2018, and seem on track to actually produce a product. They are taking another long-needed stab at a mostly-in-order VLIW-ish CPU, aiming for passable IPC at fast clocks and peak throughput that rivals GPUs. Preliminary performance claims look good, but so far it's just marketing.

    Mill Computing

    The Mill is an even more revolutionary design, with many large innovations spanning the core execution pipeline, multiprocessing, and memory protection. I have issues with their performance claims, but more relevant than that, they're evidently just never going to ship.

    Packaging

    Packaging, aka. how silicon is bundled together to make processors, is a hotbed of innovation right now.

    3D stacked chips

    DRAM and NAND flash are already 3D stacked, and this tech is coming to CPUs. Stacking logic allows for vastly denser packing of components, which is particularly important for memory access and other long-distance wiring. Intel's Foveros is a cool upcoming technology in this space. Pervasive stacking requires fundamental architectural redesigns, since it produces proportionally more heat over the same area, while making heat harder to remove.

    Wireless 3D integration

    Most silicon-silicon stacks communicate through physical metal connections. An alternative is 'inductive coupling', the same process used in wireless charging. This has some advantages and some disadvantages, with the main advantage being price. Refer to the 'Thru-Chip Interface' and this Arm blog post.

    Silicon interposers

    Chiplets, like those used on recent AMD processors, communicate fairly slowly and at a high power cost. Silicon interposers, which can contain much denser wiring, open up more opportunities for expansion, such as into lower-power environments, bandwidth-heavy workloads like GPUs, and faster memory access. Intel is pushing this with EMIB.

    MCM GPUs

    These are 'Multi-Chip Module' GPUs, aka. GPU chiplets. Intel is bounding into this space with some of their upcoming Xe GPUs, particularly 'Ponte Vecchio'. NVIDIA has published research on this too.

    x86 big.LITTLE

    Intel will soon bundle large, powerful cores and small, efficient cores on the same chip, a practice long used in smartphone CPUs to save power. Their first chip to do this is Lakefield.

    Silicon Photonics

    Optical communication has inherent advantages for bandwidth and signal integrity. Intel (have I mentioned them a lot recently?) have made major advances in this space.

    Waferscale Integration

    Silicon is generally size-limited to the 'reticle', the maximal area that can be exposed by the lithography machine when printing a silicon chip. Waferscale integration adds patterns between these reticle-sized areas so that the whole silicon wafer can work together as a single chip. This is an old idea, but it's always been hard.

    AI hardware

    Machine learning has made AI a big deal, and neural networks are amenable to very particular hardware innovations. There is so much in this space that I have selected only the most innovative approaches.

    Cerebras

    Following from the previous topic, Cerebras have made a waferscale machine learning accelerator, with 18 GB of SRAM and other insane specifications. Their chip is model parallel, meaning a neural net is physically spread out over the chip, more like an FPGA than a CPU. Cerebras have shipped to select partners.

    Groq Tensor Streaming Processor

    Groq is a lesser-known AI hardware startup with an incredibly innovative approach and impressive performance numbers. See The Linley Group's report for details. Groq's chips are available for cloud use for select partners.

    Mythic.AI

    AI is largely a matrix multiplication problem; Mythic approaches this by performing analogue operations directly on flash storage, which makes for good power/performance with much higher memory density than SRAM or even DRAM products. Mythic.AI has not yet shipped.
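
    The operation being offloaded is ordinary matrix-vector multiplication; the sketch below shows the digital equivalent of what the analogue array computes physically, where each weight is a cell's conductance, each input a voltage, and column currents sum by Kirchhoff's law (my illustration, not Mythic's code):

        /* y = W * x, the workhorse of neural network inference.
           In an analogue array, each acc += W[j][i] * x[i] step is
           one cell: current = conductance * voltage, and the column
           wire sums the currents for free. */
        void matvec(const float *W, const float *x, float *y,
                    int rows, int cols) {
            for (int j = 0; j < rows; j++) {
                float acc = 0.0f;
                for (int i = 0; i < cols; i++)
                    acc += W[j * cols + i] * x[i];
                y[j] = acc;
            }
        }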

    Vathys

    Vathys tackles the memory problem by using 3D stacked memory with wireless integration, custom memory cells, and custom asynchronous logic. Watch their 2017 talk for details.

    Alibaba HanGuang

    We know almost nothing about Alibaba's chip, but it is included here because of its unusually high performance claims. If it's legit, it's doing something clever... but keep the emphasis on 'if'.

    Memory

    With DRAM scaling dying, there is a lot of activity in new memory technologies, above an already innovative baseline.

    Zoned Namespace SSDs

    Current storage standards do not map cleanly to SSD hardware. ZNS is an upcoming standard that exposes storage as sequentially writable zones. This allows SSDs to reduce overprovisioning and get by with less DRAM and cheaper controllers, while offering greater performance and endurance. Using these SSDs does require software support from the file systems and databases running on them.
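
    Linux already has an API for zoned block devices that ZNS SSDs slot into; here is a minimal sketch of querying zones and their write pointers (simplified, most error handling elided):

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/blkzoned.h>

        /* Sketch: report the first few zones of a zoned device. Each
           zone is written sequentially at its write pointer (wp) and
           reset as a unit, which is what lets the SSD shed most of
           its flash-translation-layer bookkeeping. */
        int report_zones(const char *dev) {
            int fd = open(dev, O_RDONLY);
            if (fd < 0) return -1;

            struct { struct blk_zone_report hdr;
                     struct blk_zone zones[4]; } rep;
            memset(&rep, 0, sizeof rep);
            rep.hdr.nr_zones = 4; /* ask for the first four zones */

            if (ioctl(fd, BLKREPORTZONE, &rep.hdr) == 0)
                for (unsigned i = 0; i < rep.hdr.nr_zones; i++)
                    printf("zone %u: start=%llu wp=%llu\n", i,
                           (unsigned long long)rep.zones[i].start,
                           (unsigned long long)rep.zones[i].wp);
            close(fd);
            return 0;
        }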

    3D XPoint DIMMs ('DC Persistent Memory')

    People are rightly unenthusiastic about current Optane SSDs, but Optane DIMMs, which offer near-DRAM performance at potentially a much better price per bit, along with support for non-volatile regions, should change that. Next-generation Optane has twice the layers, so twice the capacity. Intel is still subsidising Optane with Xeon sales for now. Micron will also enter the 3D XPoint market soon, though their plans around DIMMs are unclear.

    DDR5

    Next-generation memory has a few great features, but the headline items are double the capacity and much higher speeds, with SK Hynix claiming to be working on speeds as high as DDR5-8400; at 64 bits per DIMM, that works out to roughly 67 GB/s of peak bandwidth. DDR5 will also have support for on-die ECC.

    NVRAM

    An addendum to the DDR5 standard, NVRAM adds support for non-volatile memory technologies with near-DRAM characteristics. This does not include Optane, which is not quite fast enough without a DRAM caching layer. NVRAM also includes support for much higher capacities than stock DDR5. I won't cover all the NVRAM contenders, just the two most interesting.

    Persistent DIMM filesystems and databases

    New software is needed to make the most of Optane DIMMs and NVRAM, since they offer byte-granularity access and vastly lower latency than even the fastest SSDs. Filesystems can also support DAX, which allows files to be mapped directly into process memory, relying on the underlying persistence of the storage, with no overhead.
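
    As a sketch of what this looks like on Linux (assuming a file on an ext4/XFS filesystem mounted with -o dax over persistent memory; MAP_SYNC is the flag that guarantees a direct mapping onto the persistent media):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* Sketch: write persistently through a DAX mapping, with no
           page cache or write() syscalls in the data path. */
        int dax_write(const char *path) {
            int fd = open(path, O_RDWR);
            if (fd < 0) return -1;

            char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
            if (p == MAP_FAILED) return -1; /* fs or media lacks DAX */

            strcpy(p, "survives power loss");
            /* Stores may still sit in CPU caches; flush them to media.
               (PMDK's pmem_persist() wraps the optimal CLWB/fence path.) */
            msync(p, 4096, MS_SYNC);

            munmap(p, 4096);
            close(fd);
            return 0;
        }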

    Nantero NRAM

    NRAM is carbon nanotube memory: the nanotubes act as physical switches that can be toggled with electric fields, roughly as fast as DRAM but without the scaling issues of capacitors, and fully non-volatile. Nantero have given very aggressive performance targets, such as aiming to fully replace DRAM, and perhaps later even expanding into embedded caches.

    Spin technologies

    Produced (currently in small quantities) by Everspin, these various spin-based memories store bits in the magnetic polarization of their cells. They are applicable to both NVRAM applications and embedded caches.

    CXL, Gen-Z

    These are next-gen standards for interconnects, both local to memory and accelerators (CXL), and at a larger rack-scale (Gen-Z), offering support for coherent accesses.

    In-memory compute

    Since NAND flash and DRAM accesses are much slower than local accesses from cache, people have long wished to do computation directly in memory. A recent attempt comes from UPMEM, which offers DIMMs with integrated processors and very high aggregate throughput. Mythic.AI also attempts to do neural network calculations inside flash memory. Personally, I expect UPMEM to be fighting an uphill battle, but Mythic.AI's approach seems to have a viable niche if it proves sufficiently performant.

    Graphics

    Graphics is always pushing the forefront of technology; this is the largest section.

    Xe Graphics

    Intel is entering the discrete GPU market soon. If nothing else, this should help the market's competitiveness. They are going to use MCM GPUs, at least in the high end, with Ponte Vecchio connecting either 16 or 32 chiplets in a single GPU (with 6 GPUs to a board!). Initial performance numbers from their lowest-end cards seem unimpressive, but that could be an early-hardware issue.

    Next-gen consoles

    The PlayStation 5 and Xbox Series X have been announced, with very impressive performance numbers for both their GPUs and CPUs, and fast SSDs to boot. Game graphics take a leap when consoles do, so the downstream effects should be significant.

    Other next-gen GPUs

    There isn't much to say about next-generation GPUs except that the rumours are dealing with big numbers, like 50% performance uplifts.

    Hardware decompression

    Consoles have hardware decompression to maximize asset streaming bandwidth in games. Specialized hardware didn't make much sense for desktop processors when storage speed was low (just do it in software), and on-SSD compression (which used to be a thing) has issues, but dedicated decompression accelerators now make sense for many desktop markets, and would certainly help games.

    Sampler Feedback Streaming

    This technique allows for fast, efficient, and live streaming of texture detail from the SSD to the GPU as texture data is needed. I've written about this in detail.

    Mesh shading

    The Geometry Pipeline is dead, long live the Geometry Pipeline!

    Mesh shading replaces the standard geometry pipeline with a largely software-based implementation, allowing for mesh(let) compression, better culling, and better LOD implementations, and presumably a whole lot more once people get used to it. I believe this is the same thing as the PS5's Geometry Engine.

    Next-gen DLSS

    DLSS 2.0 is an image upscaling algorithm using temporal supersampling, aka. collecting pixel data over multiple frames. It shows very good results with high detail and very low aliasing, and practically doubles a GPU's effective performance. AI is getting better very fast, and version 2.0 is only the second step.
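
    DLSS itself is proprietary, but the temporal accumulation at its core is simple to sketch: render each frame with a sub-pixel jitter, then blend it into a history buffer reprojected from the previous frame. The neural network's job is deciding how much history to trust per pixel; the sketch below (my illustration, not NVIDIA's algorithm) uses a fixed blend factor instead.

        typedef struct { float r, g, b; } Color;

        /* One pixel of temporal accumulation: an exponential moving
           average over jittered frames. With alpha = 0.1, roughly the
           last ten frames contribute, multiplying effective samples. */
        Color accumulate(Color history, Color current, float alpha) {
            Color out = {
                history.r + alpha * (current.r - history.r),
                history.g + alpha * (current.g - history.g),
                history.b + alpha * (current.b - history.b),
            };
            return out;
        }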

    Frame rate upscaling

    Frame rate upscaling allows rendering at one frame rate and adding generated intermediates either between or after the true frames, depending on where one sits on the quality-latency trade-off. As AI accelerators can be made very fast, this is likely cheaper than truly rendering all those intermediate frames, and AI frame rate upscaling is likely to have significantly less artefacting than traditional algorithmic methods.

    AI graphics enhancement

    If you've been following NVIDIA's GAN papers, like StyleGAN 2, you'll know that they are unfairly good at generating images. Eventually these techniques will allow live improvements of either textures or full video game frames. This won't happen tomorrow, but it is clear NVIDIA thinks AI graphics is the future.

    Foveated rendering

    The human eye has a very small sweet spot, outside of which we have low sensitivity to detail. With an eye tracker and a sufficient refresh rate, the computer can render at high detail only those areas of the screen that are viewed directly, with other areas rendered at low detail. Artefact-free upscaling methods, such as AI-based approaches, are needed to avoid negative impacts.
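
    A sketch of the core decision (the thresholds here are illustrative guesses; real systems tune them to the display's optics and the tracker's latency):

        #include <math.h>

        /* Map a pixel's angular distance from the gaze point to a
           coarse shading rate, as in variable rate shading. */
        int shading_rate(float px, float py, float gx, float gy,
                         float deg_per_pixel) {
            float ecc = hypotf(px - gx, py - gy) * deg_per_pixel;
            if (ecc < 5.0f)  return 1; /* fovea: full-rate 1x1 shading */
            if (ecc < 15.0f) return 2; /* near periphery: 2x2 */
            return 4;                  /* far periphery: 4x4 */
        }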

    Texture-Space Shading

    TSS has been done before, but Sampler Feedback is needed to make it efficient. Instead of shading pixels in screen space, the textures of the visible triangles are shaded on the object mesh from the camera's perspective, and the final image is rasterized by sampling those textures. This reduces artefacts like flickering, and has several extra gains, which I've split out into the next few points.

    Asynchronous shading

    With TSS, it is possible to render the pixels to the mesh at one 'shading rate' and resolution, and rasterize this to an image at a greater frame rate and resolution. These lower texture rates and resolutions are less visible than screen-space effects, since the triangle mesh is preserved at full fidelity. Further, meshlets may be shaded at rates depending on specularity, rotational velocity, or movement of a light source, with resolutions depending on other variables. Different properties of a texture might even be rendered at different rates.

    TSS & Virtual Reality

    TSS has two primary advantages for VR environments. First, both eyes can reuse all or some shaded texels. Second, texture-space foveated rendering will probably have fewer artefacts than traditional downscaling approaches.

    Texture-Space DLSS and interpolation

    This is purely conjecture, but I suspect that with the easy accessibility of mipmaps and the reduced artefacting of TSS, performing AI upscaling and frame rate interpolation (for specularity) on textures directly will produce better output than screen-space approaches.

    Simulations in games

    With consoles having faster multicore CPUs, and desktop CPUs going as high as 64 cores, more physics simulations will become practical in games. A 64 core CPU has a lot of headroom, should they come down in price over the next decade.

    Human-Computer Interaction

    Monitors and input devices don't change all that quickly, but we also get more media attention on longer timescale developments, like MicroLED, lightfields, and brain-computer interfaces.

    OLED monitors

    OLED has vastly superior colour and switching speed compared to LCD. Samsung has reportedly exited the LCD business to focus fully on OLED displays. OLED has so far had limited impact on monitors due to its susceptibility to burn-in, but advances like Quantum Dot OLED displays, which use mono-blue OLED pixels, somewhat reduce that susceptibility.

    Dual-layer LCDs

    LCDs suffer from low contrast due to light leakage. Dual-layer LCDs greatly reduce light leakage by stacking two LCD filters over a standard backlight, with only the top layer running at full resolution. Since the layers' contrast ratios multiply, two 3000:1 panels can in principle reach 9,000,000:1, closing the image quality gap with OLED using the cheaper of the two panel technologies.

    MicroLED

    MicroLED is an inorganic variant of OLED, generally with no susceptibility to burn-in and much higher supported peak brightness, and potentially even better black levels. MicroLEDs can be fabricated fairly efficiently, but it is currently very challenging to embed these into usable panels at affordable prices. However, it seems to be the future of displays, and is under very active R&D.

    MicroLED-on-wafer displays

    MicroLEDs are expensive to pick-and-place, but can be fabricated at very high densities with traditional semiconductor processes. In some applications, small, incredibly high density displays are actually superior, such as VR and AR, and microLED has a much easier entry to that market. JBD have the most impressive demos of this: up to 2 million nits, 5000x4000 resolution, 10,000 dpi, at 1,000 Hz, all in displays that weigh a gram or less. Properly calibrated full-colour displays are WIP, but the promise is clear.

    CLEARink

    CLEARink is an ePaper-like ink-based reflective display, but structured like an LCD, built in a modified LCD fab. They support colour using a filter, have better contrast than ePaper, and can make both bistable and (moderately) high-refresh rate panels.

    Advanced Color ePaper

    We've had colour ePaper for a while using colour filters, but traction has been limited in consumer markets because of the reduced contrast. E Ink have figured out how to provide colour ePaper without a colour filter, by putting all coloured pigments in every pixel.

    Lightfield cameras and displays

    Lightfields capture the full field of light over a plane, capturing all views at all angles of a scene. A simplified problem exists for VR, where a lightfield display only needs to handle all focal planes. Various approaches exist to tackle this, such as stacking a few projections in each focal plane, or using microlens arrays over ultra-dense backing displays.

    EMG interfaces

    Oculus recently bought CTRL-labs, producers of a supposedly high-fidelity EMG device that reads the nerve signals sent to the hand from a device strapped around the wrist, allowing reconstruction of muscle movements with more bandwidth than traditional controller-based approaches. CTRL-labs have shown people typing with this hardware.

    Neural interfaces

    Going to the extreme, Neuralink is looking to insert physical neuron detectors directly into the brain. Watch their launch event or read their paper for details. This is a consumerization of technology that has been proven out in universities, and we're at the point where it's moving from questions of possibility to questions of feasibility.

    Next-gen VR

    We don't know anything about next-gen VR specifically, but VR will eventually become the endgame for interactive experiences, and it is all a question of refinement. Keep an eye out for this.

    Circuitry

    The fundamental backbone of computational advances, and one of the most advanced tasks mankind has tackled, this section covers both short- and long-term advances in this space.

    New process nodes

    TSMC's 5nm and 3nm, Intel's 10nm+ (aka. 'working 10nm') and 7nm, and perhaps Samsung's future nodes too, all represent important steps forward for the industry. There's not much to say specifically about these, except that it looks optimistically like we'll stay on a good cadence until at least TSMC 3nm, and Intel's 7nm should bring fresh competition to the market.

    Nano OPS

    Nano OPS is a replacement for much of the established semiconductor fabrication pipeline, excluding lithography. They claim it costs one to two orders of magnitude less, supports a greater variety of materials, and comes at no loss to fidelity. They 'just finished the qualification [...] with a $35B US electronics company' (presumably Micron) and are 'working on the equipment that would go into their fabs'. If true, this would legitimately revolutionize the industry. Here's an interview with their CTO.

    Quantum

    Quantum computing has had recent breakthroughs, particularly from Google and IBM, who both have 53-qubit quantum computers. We are a long way from practical applications, and these machines are more limited than most of the public understands, but they truly represent cutting-edge physics, and the effort to build them is sincere.

    New structures and materials

    Upcoming process nodes are introducing new techniques and materials, like nanowires and germanium. See semiengineering.com to gain insight.

    Vacuum-Channel Transistors

    Affordable nanoscale fabrication has sparked renewed investigation of vacuum-channel transistors, the same technology used in vacuum tubes. At sufficiently small scales the downsides of vacuum channels shrink significantly (power, and surprisingly even the need to pull a vacuum, since at nanometre gaps electrons rarely collide with air molecules), and vacuum channels are otherwise extremely efficient electron carriers, allowing much higher switching rates than semiconductors. This technology is in very early prototype stages, so don't hold your breath.

    Asynchronous Logic

    The clock signal is a surprisingly power-hungry aspect of modern chips, and acts as a performance limiter. Asynchronous logic tracks its own readiness, removing the need for a clock signal. It has a lot of problems, some integral to the technology (it costs extra wires to track readiness) and some simply issues of tooling, but it has long been an area of curiosity, with occasional experimentation by both academia and major industry players.
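
    The classic primitive here is the Muller C-element, which is easy to sketch in software: its output changes only when its inputs agree, which is how an asynchronous pipeline stage signals its own readiness rather than waiting on a clock edge.

        /* Muller C-element: follow the inputs when they agree,
           otherwise hold the previous output. Chains of these
           implement request/acknowledge handshakes in place of
           a global clock. */
        int c_element(int a, int b, int prev) {
            return (a == b) ? a : prev;
        }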

    Optical computing

    Light has several theoretical advantages over electricity, so there is continual interest in using it for computations directly. The best early progress has been in using light for matrix multiplication for AI applications.

    Integrated FPGAs

    FPGAs are much faster than CPUs at a subset of computations. Several companies have experimented with CPU-FPGA hybrids, like Intel's Xeon Scalable Gold 6138P. These products are likely to stay niche.

    Superconducting computing

    Superconducting circuits can operate at vastly better frequencies with a fraction of the power cost per computation (though it takes some work to keep them cool). Superconducting circuits do exist for very specific use-cases, at very limited scale.

    Reversible computing

    Though unlikely to be relevant for a long while, computing may eventually hit Landauer's principle, which gives the theoretical lower limit on the energy consumed per bit of computation; this limit is still millions of times less energy than computers currently use. Landauer's limit can only be avoided with logically reversible computing, which preserves the information needed to undo each computation.
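
    For scale, the limit at room temperature (T = 300 K) is:

        \[
        E_{\min} = k_B T \ln 2
                 \approx (1.38 \times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
                 \approx 2.9 \times 10^{-21}\,\mathrm{J}
        \]

    per bit erased. Real chips spend vastly more than this per logical operation once transistors, wires, and caches are counted, hence the millions-fold headroom.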

    Other

    Here are a few more things of potential importance.

    Starlink

    SpaceX is putting satellites in low Earth orbit to serve high-bandwidth internet globally at better-than-fibre latency. This will have a major impact on the global internet, both in terms of accessibility for the underserved and in reducing some long-haul latencies by as much as a factor of two. If SpaceX's Starship rocket pans out, space tech is likely to become very significant.
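
    The factor-of-two claim is just geometry plus the speed of light: light in fibre travels at roughly 2/3 c, while vacuum laser links between satellites run at full c. A back-of-the-envelope sketch for London to Singapore (~10,900 km great circle; the routing overheads are my guesses):

        #include <stdio.h>

        int main(void) {
            const double c = 299792.458; /* km/s, speed of light in vacuum */
            const double dist = 10900.0; /* km, great-circle distance */

            /* Fibre: ~2/3 c, plus ~50% extra path from real-world routing. */
            double fibre_ms = dist * 1.5 / (c * 2.0 / 3.0) * 1000.0;
            /* LEO: full c, ~10% path detour plus up- and downlink to 550 km. */
            double leo_ms = (dist * 1.1 + 2.0 * 550.0) / c * 1000.0;

            printf("one-way: fibre ~%.0f ms, LEO lasers ~%.0f ms\n",
                   fibre_ms, leo_ms); /* ~82 ms vs ~44 ms */
            return 0;
        }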

    Fuchsia

    Not quite hardware, but still subreddit-relevant, Fuchsia is a truly new operating system by Google. Unlike Android, it is not based on Linux. Fuchsia does a lot of things right, such as its security model, and is making steady progress.

    Better batteries

    Battery technology doesn't improve as fast as silicon, but it does improve, regardless of how bitter people are that some battery from a 2015 university-lab article hasn't made it into a product yet. Solid-state batteries look to be starting their production ramp, with expectations of mass production this decade. Tesla has a Battery Day coming soon, where they should make their own outlandish claims about the future.

    Off-topic bonus: Fusion

    Fusion power research has recently had a renaissance, with lots of new companies entering the space. These aren't your old many-billion-dollar ITER-style projects; new technology has made the problem far more tractable. Designs based on high-temperature superconductors came into prominence in 2015 with MIT's ARC; you can learn about this approach from this talk. First Light Fusion is another challenger, attempting an interesting take on inertial fusion using specially shaped pellets.

    submitted by /u/Veedrac
    [link] [comments]

    Crucial P2 & P5 PCIe SSDs announced

    Posted: 21 Apr 2020 09:38 PM PDT

    On WD Red NAS Drives - Western Digital Corporate Blog

    Posted: 21 Apr 2020 08:41 AM PDT

    NUC 9 Megathread with Intel’s NUC Team (come ask questions!)

    Posted: 21 Apr 2020 10:44 AM PDT

    [VideoCardz] Exclusive: GIGABYTE AORUS Z490 Motherboards are PCIe 4.0 Ready

    Posted: 22 Apr 2020 01:17 AM PDT

    Polish review of Hyperbook NH5 - Clevo laptop with desktop Ryzen CPUs

    Posted: 21 Apr 2020 10:24 AM PDT

    Acer Nitro 5 2020 will be available with AMD Ryzen 4000, best GPU option is reserved for Intel

    Posted: 22 Apr 2020 12:52 AM PDT

    Razer’s new Blade Stealth 13 has the world’s fastest 13.3-inch screen

    Posted: 21 Apr 2020 08:32 AM PDT

    Missing features of WiFi 6 routers

    Posted: 21 Apr 2020 08:15 AM PDT

    1. What features are missing from WiFi 6 routers from Netgear, Asus, TP-Link, etc.? I know some common features like WPA3 and Target Wake Time are missing, but what else?

    https://community.netgear.com/t5/Nighthawk-Routers-with-WiFi-6-AX/What-is-Target-Wake-Time-Wi-Fi-6/m-p/1707070

    2. Also, is it possible to patch some of the features in through firmware updates?

    3. Will the router companies release revised versions of their previous and current WiFi 6 routers after the WiFi 6 draft is finalized?

    I've also found a link online which points to a table which has the dates for the WiFi 6 drafts:

    http://www.ieee802.org/11/Reports/802.11_Timelines.htm

    submitted by /u/KMSpo
    [link] [comments]

    5700 XT and RX 580 in same system for folding?

    Posted: 21 Apr 2020 10:22 PM PDT

    Hello, I have a 5700 XT in my PC, but I have an extra RX 580 just laying around. From some quick 10-minute research, people claim that it's possible. I know that it won't run in CrossFire and won't give me a performance boost in gaming; I was just wondering if it is possible for the purpose of folding (for Folding@home).

    Is this even possible? If so, what should I do about drivers?

    submitted by /u/AMushyGrape
    [link] [comments]

    Lian Li Lancool II Mesh release date? Any news or rumors?

    Posted: 21 Apr 2020 05:03 PM PDT

    Anyone have any info on when the Lancool II Mesh case might realistically launch? I almost committed to buying it several times this week because I'm impatient and I really want to build in one. The last scoop I found was AnandTech's CES coverage in January, which stated the mesh version would be released in April; however, this was long before the coronavirus wreaked havoc on production lines. Does anyone have any news on this?

    submitted by /u/weztmarch
    [link] [comments]

    Kingston’s Canvas Select Plus, Go Plus & React Plus SD & MicroSD cards Review

    Posted: 21 Apr 2020 06:05 AM PDT

    Forget the upcoming PS5 vs Xbox Series X battle, the PlayStation 4 is still busy destroying the Xbox One in global hardware sales

    Posted: 21 Apr 2020 07:34 PM PDT
