January 18, 2023
“The talent level required to train a massive model with high FLOPS utilization on a GPU grows increasingly higher because of all the tricks needed to extract maximum performance.”

How Nvidia’s CUDA Monopoly In ML Is Breaking – OpenAI Triton & PyTorch 2.0 https://t.co/zrOtrOBKVX

This is one of the many thoughts I post on Twitter on a daily basis. They span many disciplines, including art, artificial intelligence, automation, behavioral economics, cloud computing, cognitive psychology, enterprise management, finance, leadership, marketing, neuroscience, startups, and venture capital.

I archive all my tweets here.