Comparing Reality to Apple’s Memory Claims | RTX4090m vs Apple

Reality vs Apple’s Memory Claims vs RTX4090m

In a field that moves as fast as consumer electronics, it can be difficult to separate reality from marketing. This is especially true of the memory claims made by companies like Apple and the advertised performance of graphics cards like the RTX4090m.

Apple’s Memory Claims

Apple has made bold claims about the memory capabilities of its devices, often touting the amount of storage available on its products. While Apple does offer a range of memory options across iPhones, iPads, and Mac computers, the actual usable storage is typically less than advertised, because the operating system and pre-installed apps take up space.
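
One quick way to see this gap on your own machine is to compare an advertised capacity against what the OS actually reports. Below is a minimal sketch in plain Python; the 256 GB figure is a made-up example, not a specific Apple spec.

```python
import shutil

# Compare a hypothetical advertised capacity against what the OS reports.
# Vendors quote decimal gigabytes (10**9 bytes), and usable space is
# further reduced by the OS and pre-installed software.
ADVERTISED_GB = 256  # hypothetical "256 GB" marketing figure

usage = shutil.disk_usage("/")  # named tuple: total, used, free (bytes)

print(f"Advertised:     {ADVERTISED_GB} GB ({ADVERTISED_GB * 10**9:,} bytes)")
print(f"Reported total: {usage.total / 10**9:.1f} GB")
print(f"Actually free:  {usage.free / 10**9:.1f} GB")
print(f"Gap vs. advertised: {ADVERTISED_GB - usage.free / 10**9:.1f} GB")
```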

RTX4090m Performance

The RTX4090m is a high-performance mobile graphics card designed for gaming and professional applications. Nvidia markets the card as capable of handling the most demanding games and graphics-intensive tasks. In practice, however, its performance varies with system configuration, cooling solutions, and software optimization.
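
Because of that variance, the only number that matters is the one measured on your own machine. A few lines of PyTorch are enough for a rough check; this is an illustrative sketch (assuming a CUDA build of PyTorch and an Nvidia GPU), not the video's actual test code.

```python
import time
import torch

# Rough sustained-throughput check on whatever GPU is installed.
assert torch.cuda.is_available(), "needs a CUDA-capable GPU"

n = 8192
a = torch.randn(n, n, device="cuda", dtype=torch.float16)
b = torch.randn(n, n, device="cuda", dtype=torch.float16)

a @ b                                  # warm-up
torch.cuda.synchronize()
start = time.perf_counter()
iters = 20
for _ in range(iters):
    c = a @ b
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

flops = 2 * n**3 * iters               # multiply-adds per n x n matmul
print(f"~{flops / elapsed / 1e12:.1f} sustained TFLOPS")
```

On a thermally limited laptop, this number can land well below the chip's paper spec.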

Separating Reality from Marketing Claims

Consumers should do their own research rather than rely solely on marketing claims. For memory, that means checking the actual usable storage against their needs. For graphics cards like the RTX4090m, it means consulting independent benchmarks and user reviews to get a realistic picture of the card's capabilities.

In conclusion, reality can differ from the claims made by companies like Apple and from the marketing materials for products like the RTX4090m. Consumers should evaluate such claims critically and make informed decisions based on their specific needs and requirements.

40 Comments
@xeridea
4 months ago

The 1 TB/s on the 4090 is in line with the specs; it is essentially the bandwidth of the RAM on the card. The other numbers all go across the PCIe bus and are also affected by system RAM and CPU speeds. Apple is lying a bit saying their bandwidth is 10x that of the fastest desktop card, when in reality it is less.
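
The distinction this comment draws is easy to measure directly. A rough sketch, assuming PyTorch with CUDA (illustrative only, not the video's benchmark): a device-to-device copy exercises the card's own VRAM, while a host-to-device copy is bottlenecked by the PCIe bus.

```python
import time
import torch

def bandwidth_gbs(copy_fn, nbytes, iters=10):
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        copy_fn()
    torch.cuda.synchronize()
    return nbytes * iters / (time.perf_counter() - start) / 1e9

n = 256 * 1024 * 1024                      # 256M float32s = 1 GiB
nbytes = n * 4
gpu_src = torch.empty(n, device="cuda")
gpu_dst = torch.empty(n, device="cuda")
cpu_src = torch.empty(n, pin_memory=True)  # pinned host memory for DMA

vram = bandwidth_gbs(lambda: gpu_dst.copy_(gpu_src), nbytes)
pcie = bandwidth_gbs(lambda: gpu_dst.copy_(cpu_src, non_blocking=True), nbytes)
print(f"VRAM copy: ~{vram:.0f} GB/s   PCIe copy: ~{pcie:.0f} GB/s")
```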

@jasonhurdlow6607
4 months ago

FYI, a 4090 laptop chip is not an AD102 chip (used in a desktop 4090); it's really an AD103 (desktop 4080) chip. Try it on a desktop 4090.

@mddunlap03
4 months ago

The 4090 mobile is a 4080.

@williamtalbot5040
4 months ago

Atari Jaguar, anyone?

@AzumiRM
4 months ago

Now try and run… say… Cyberpunk on an Apple product with ray tracing enabled and see what sort of performance you get. Show us that vs. a 4090 with a 14th-gen Core i9 and 64 GB of RAM. I guarantee the winner will be the Windows PC. It will cost far less than the best Apple laptop or desktop and perform far better. Apple is useless for gamers.

@fungo6631
4 months ago

PCMR wins again!

One problem with shared RAM, as seen with the N64 back in the day, is that the GPU and CPU have to fight for RAM access. You need to write code differently than on non-shared architectures, with smaller code being faster simply because it fits in the cache. Of course, modern systems have way more cache than the N64, but the logic still applies: if either side is a memory hog, you're gonna have a bad time.
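
A small, generic illustration of the cache point (nothing N64-specific): the strided walk below touches one element per 64-byte cache line, so it streams the same 512 MB of memory as the sequential sum while doing an eighth of the additions, and ends up taking a similar amount of time.

```python
import time
import numpy as np

a = np.arange(64_000_000, dtype=np.int64)   # 512 MB of int64s

t0 = time.perf_counter()
a.sum()                                     # 64M adds, sequential access
t1 = time.perf_counter()
a.reshape(-1, 8)[:, 0].sum()                # 8M adds, one per cache line
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.3f}s   strided: {t2 - t1:.3f}s")
```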

@VultureGamerPL
4 months ago

This MSI laptop looks like a child's toy.

@victormtzc
4 months ago

Dude, what are you saying? The memory bandwidth for deep learning on a GeForce 4090 is 1008 GB/s. Most deep learning computation is done on professional desktops, not laptops. Nobody wants to do deep learning on a laptop that costs 4000 dollars and let it cook for days; you need a desktop you can leave running. You can buy any MacBook Pro at MSRP, but you cannot buy a 4090 at MSRP. Think about it. Most people starting out in deep learning go to the Nvidia platform, which took over 16 years to build. If you were a Windows guy, you would be saying the same thing about the AMD and Intel AI accelerators; the fact is these accelerators are only useful in apps designed to use them, not in real research and model training.

@indeedDE01
4 months ago

7:25

I can run a 13B model just fine on a 16GB system and a 7B model even works on 8GB cards.
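
These numbers check out with back-of-the-envelope math. A tiny sketch (weights only, ignoring activation and KV-cache overhead):

```python
# Weights-only memory estimate: parameter count x bytes per parameter.
def weight_gib(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 2**30

for params in (7, 13):
    for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
        print(f"{params}B @ {label}: ~{weight_gib(params, bpp):.1f} GiB")

# 13B @ 8-bit is ~12.1 GiB (fits in 16 GB); 7B @ 4-bit is ~3.3 GiB
# (fits on an 8 GB card), which matches the comment.
```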

@degenerate_kun_69
4 months ago

I think you forgot to plug the 4090 in.

@anthonyleclerc299
4 months ago

Don't look up the PNY NVIDIA Quadro RTX 8000 price.

@EfrainDeLaRocha
4 months ago

Buys a 3000-dollar machine to run a 5-minute test to debunk one question.

@Elyron2004
4 months ago

The first mistake was buying MSI.

@CRC.Mismatch
4 months ago

Where's the link to the repository? 😑

@leapbtw
4 months ago

Mobile GPUs are NOT the same as desktop GPUs. You should try with a desktop 4090.

@mr.i1463
4 months ago

Why would somebody use a notebook for machine learning?
Even when they have powerful hardware built in, that hardware is limited by cooling capacity. A notebook is never good at cooling, so the hardware is throttled by default to avoid overheating and reduces its TDP. Also, CPU machine learning on normal PCs? Madness! Almost nobody uses that method; everybody goes to the GPU, and there are even special cards for it.

Like the "NVIDIA RTX 6000 Ada" but they are expensive… but they are beasts. Also big machine learning projects on a cpu is not the good way. GPUs are for big data intensive learnings. CPU is only nice if you have small datasets. So its not the best idea… to use a mobile mac for such huge project. Starting with a rtx 4090 with 32gb is a good start for semi-pro. After that no way around pro cards.

@lenoirx
4 months ago

Common Apple L

@donutwindy
4 months ago

1 TB/s seems reasonable; GDDR6X specs say 912–1152 GB/s. I would have expected that to be limited to desktops, but if the 4090m is running at that bandwidth on a laptop, it's pretty impressive.

I am surprised, however, at how fast Apple is. I mean, it's slower from a bandwidth perspective, but it's a LOT of "effective" VRAM, and that would be impractical to match with GDDR6X at a reasonable price. If you need that much, Apple makes sense. Nvidia is targeting gamers specifically and doesn't expect anyone to use a laptop for AI.

The regular 4090 has 24 gigs, but Nvidia starts adding zeros to the price when you want more, because they assume that if you need more, you are doing AI and have deep pockets.
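
Those figures follow from simple spec arithmetic (bandwidth = per-pin data rate x bus width / 8). A quick check, using what I believe are Nvidia's published numbers (worth verifying against the spec sheets):

```python
# bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8
def gddr_bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(gddr_bandwidth_gbs(21, 384))  # desktop 4090: 21 Gbps x 384-bit = 1008.0
print(gddr_bandwidth_gbs(18, 256))  # laptop "4090" (AD103): 18 Gbps x 256-bit = 576.0
```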

@Smirnoff67
4 months ago

What a clown world we live in…

@jusk2ru
4 months ago

Imagine when this guy discovers that dedicated GPUs exist.