Unbelievable Machine Learning Capabilities on Neural Engine | M2 Pro/Max

Machine learning is revolutionizing the way we use technology, and with the introduction of Apple’s M2 Pro/Max Neural Engine, the possibilities are endless. The Neural Engine is a dedicated machine learning accelerator that is built into the M2 Pro/Max chip, and it is designed to power incredibly fast and efficient machine learning computations.

What sets the Neural Engine apart is its 16-core design, capable of processing 15.8 trillion operations per second. This means that the M2 Pro/Max can handle complex machine learning tasks at remarkable speed, allowing for seamless integration of machine learning into a wide range of applications.

One area where the Neural Engine truly shines is in image and speech recognition. With its advanced neural processing capabilities, the M2 Pro/Max can accurately identify and analyze images and speech in real time, opening up new possibilities for interactive and personalized user experiences.

Furthermore, the Neural Engine’s performance and power efficiency make it well suited to running complex machine learning workloads, such as natural language processing and predictive analytics. This enables developers to create smarter and more intuitive applications that can learn from and adapt to user behavior in real time.
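
To make that concrete, here is a minimal sketch of how a developer might request the Neural Engine from Python using coremltools; the toy model, input shape, and file name are placeholders, and Core ML ultimately decides which compute unit each layer actually runs on.

```python
# Minimal sketch: converting a toy PyTorch model to Core ML and requesting
# the Neural Engine. The model, shapes, and file name are placeholders;
# Core ML decides at runtime which compute unit each layer actually uses.
import torch
import coremltools as ct

toy_model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 10),
).eval()

example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(toy_model, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example.shape)],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # CPU plus Neural Engine, skipping the GPU
)
mlmodel.save("ToyClassifier.mlpackage")
```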

Overall, the combination of the M2 Pro/Max chip and the Neural Engine represents a significant leap forward in machine learning capabilities, and it has the potential to transform the way we interact with technology. With its insane processing power and efficiency, the Neural Engine is set to revolutionize the world of machine learning and open up new possibilities for innovation and creativity.

Comments
@haon2205
6 months ago

Those notches are an eyesore

@peterwan7945
6 months ago

I would love to know what kind of models the Neural Engine can handle and how fast it handles them in comparison to the CPU and GPU. 😂😂😊

@onclimber5067
6 months ago

Would be amazing to see this test done with the new M3 models, since they are supposed to have a much better GPU.

@watch2learnmore
6 months ago

It would be great if you could revisit the Neural Engine's impact now that you're benchmarking with local LLMs.

@mahdiamrollahi8456
6 months ago

What we do in PyTorch is specify the device, like cpu or cuda. What do we have to do to use the GPU or ANE on the Apple Silicon series?
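
For the PyTorch half of that question, a minimal sketch assuming a recent PyTorch build with MPS support: the mps device targets the Apple Silicon GPU through Metal, while the ANE is only reachable indirectly through Core ML (for example via a coremltools conversion), not as a PyTorch device.

```python
# Minimal sketch: selecting the Apple Silicon GPU ("mps") in PyTorch with a
# CPU fallback. The ANE is not exposed as a PyTorch device; it is reached
# through Core ML (e.g. after a coremltools conversion).
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

x = torch.rand(4, 3, 224, 224, device=device)
w = torch.rand(10, 3 * 224 * 224, device=device)
logits = x.flatten(1) @ w.T  # runs on the GPU when device is "mps"
print(device, logits.shape)
```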

@giovannimazzocco499
6 months ago

Did you consider repeating the benchmark for M3s?

@SiaTheWizard
6 months ago

Amazing examples and tests, Alex. I was actually looking for a YOLO test for Mac and this was the best video I've seen. Keep it up!

@henryjiang9990
6 months ago

So which one should I get?

@AdamTal
6 months ago

Can you compare the base stock M3 Max to the top stock M3 Max (I don’t mean customization, just stock options)? Any ML benchmarks would be great. Thank you

@rhard007
6 months ago

Your content is the best for developers on YouTube. You should have a million subs. Thank you for all you do.

@antor44
6 months ago

Very interesting video, but the data is explained too quickly; I have to set the YouTube player speed to at least 75%.

@urluelhurl
6 months ago

What are the advantages of using an M2 Max MacBook Pro, which does not have a dedicated GPU, when for a similar price I could buy a P15 with an RTX 5000 that comes already equipped with Ubuntu and Nvidia data science packages?

@yongjinhong5533
6 months ago

Hey Alex, have you tried increasing the number of CPU workers? Most of the computation overhead on Macs is in transferring data from the CPU to the GPU.
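
For anyone trying that suggestion, a minimal PyTorch sketch of raising the DataLoader worker count; the dataset, batch size, and num_workers value are placeholders, and the best settings depend on the machine.

```python
# Minimal sketch: more CPU worker processes feeding batches to the GPU.
# Dataset, batch size, and worker count are placeholders to illustrate the idea.
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    dataset = TensorDataset(torch.rand(2_000, 3, 64, 64),
                            torch.randint(0, 10, (2_000,)))
    loader = DataLoader(
        dataset,
        batch_size=32,
        shuffle=True,
        num_workers=4,            # more workers can hide host-side loading latency
        persistent_workers=True,  # keep workers alive between epochs
    )
    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        break  # one batch, just to show the host-to-GPU transfer

if __name__ == "__main__":  # guard required for multiprocessing workers on macOS
    main()
```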

@abusalem411
6 months ago

It’s all fun and dance, fluff and unicorns until you start working on real-life projects, hear me out: I had a 14” M1 Max w/ 32GB RAM while I studied at uni. My best machine ever! Loved it… but when you have to spend half a day under time pressure solving a specific and rare bug caused by the poor TensorFlow Apple Silicon support, that's not much fun. 😢 I had to solve quite niche and specific problems (like natural computing algorithms and simulations), and finding the right solution was challenging enough, let alone finding one that supports Apple Silicon. Excellent battery life, and when it works it is stupidly fast, but my next machine is a desktop PC. Machine learning and DS aren’t mainstream use cases, so it takes some time for the libraries and devtools to catch up.

@gufransabri9012
6 months ago

I have an HP Victus laptop with an RTX 3050 Ti laptop GPU (4GB of VRAM). I use it for deep learning, and the 4GB is less than sufficient. I always run into OutOfMemoryError.
I'm considering buying the MacBook Pro 14-inch M2 Pro with 16GB RAM, the one that you tested in this video. Should I buy it? Will it be sufficient for me? Can someone give me an in-depth answer?

My use case is not that of a beginner AI student, but I'm also NOT training LLMs. For example, I'm doing a project where I'm training an EfficientNet-B5 model and I can't use a batch size of more than 8, otherwise it gives me an OutOfMemory error.

Anyway, can someone please help me? Should I buy the M2 Pro MacBook 14-inch with 16GB RAM?
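
One common workaround for that batch-size ceiling, whichever machine you end up with, is gradient accumulation: run several small micro-batches and step the optimizer once. A minimal PyTorch sketch, with the model, data, and accumulation factor as placeholders.

```python
# Minimal sketch: gradient accumulation to simulate a larger batch size when
# memory only allows small micro-batches. Model, data, and the accumulation
# factor are placeholders.
import torch

model = torch.nn.Linear(512, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()
accum_steps = 4  # effective batch size = micro-batch size * accum_steps

data = [(torch.rand(8, 512), torch.randint(0, 10, (8,))) for _ in range(16)]

optimizer.zero_grad()
for step, (x, y) in enumerate(data, start=1):
    loss = loss_fn(model(x), y) / accum_steps  # scale so gradients average out
    loss.backward()
    if step % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```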

@blacamit
6 months ago

Hello Alex, can you tell me which Linux distributions would be ideal for starting a career in programming? I'm a newbie Java programmer.

@niharjani9611
6 months ago

Which IDE did you use on the M2 MacBook Pro? Hoping for an answer 😅

@kahlilkahlil9172
6 months ago

What's your opinion: would you rather spend money on a high-end MacBook or actually use Google Colab to do deep learning stuff?

@derekeadie6230
6 months ago

Just got an M1 Max 24-core/64GB in Sept 2023 for video; it still feels like a good enough machine today.

@ameliabuns4058
6 months ago

The fact that you can only use the ANE with Metal and Swift is super annoying.
I wanna use TensorFlow and Python 😐
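
For what it's worth, the ANE can also be reached from Python, just indirectly through Core ML rather than through TensorFlow or PyTorch. A minimal sketch of loading a previously converted model (the .mlpackage file name is a placeholder) and asking for the Neural Engine with coremltools:

```python
# Minimal sketch: running a converted Core ML model from Python and requesting
# the Neural Engine. "ToyClassifier.mlpackage" is a placeholder for a previously
# converted model; Core ML still decides what actually runs on the ANE.
import numpy as np
import coremltools as ct

mlmodel = ct.models.MLModel(
    "ToyClassifier.mlpackage",
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)

# The input name depends on the converted model; read it from the spec.
input_name = mlmodel.get_spec().description.input[0].name
out = mlmodel.predict({input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)})
print(list(out.keys()))
```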