According to Gartner, talent scarcity is the key challenge for digital business initiatives. Aren’t those the skills most needed to lead ANY kind of initiative? https://t.co/ZXkx9LDMh7
In 2016, Nature asked 1,500 scientists across multiple disciplines: “Is there a reproducibility crisis?”. The answer was “yes” (https://t.co/o4UmEx2xmf). That answer applies to #AI as well, because there’s not enough transparency. That’s the problem, not shadow AI.
And you thought it was just about manipulating any face to say anything we want > “in a survey performed with workers from Amazon’s Mechanical Turk, 43% thought that the #AI-generated objects were real” Google’s AI realistically inserts objects into images https://t.co/QBGQpbDknw
Nobody challenges their TCO spreadsheets. I learned that after thousands of interactions with F500/GF2000 customers over the last 10 years. Also, the very limited adoption of capacity mgmt/optimization solutions in the last decade is proof that people underestimate that complexity. https://t.co/kVkFWAEYRS
In this report, Gartner praises the advent of synthetic data as the way to democratize #AI. But it says nothing about the fact that if the generative model is not extremely accurate, the resulting synthetic datasets can be biased.
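A minimal sketch of that point (my own illustration, not from the report): fit a deliberately mis-specified generative model (a Gaussian) to skewed “real” data and sample synthetic data from it. The dataset, model, and numbers are all hypothetical, chosen only to show how a model’s inaccuracy flows straight into the synthetic set.

```python
import random
import statistics

random.seed(0)

# Hypothetical "real" data: heavily right-skewed and strictly positive
# (think incomes), drawn here from an exponential distribution.
real = [random.expovariate(1 / 50_000) for _ in range(10_000)]

# A naive generative model: assume the data is Gaussian. This is the
# "not extremely accurate" model the tweet warns about.
mu, sigma = statistics.mean(real), statistics.stdev(real)
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]

# The model's inaccuracy shows up directly in the synthetic dataset:
# the Gaussian emits impossible negative values, and it erases the skew.
print(min(real) >= 0)        # the real data is never negative
print(min(synthetic) < 0)    # the synthetic data is
print(sum(x > mu for x in real) / len(real))            # ~0.37: skewed
print(sum(x > mu for x in synthetic) / len(synthetic))  # ~0.50: skew gone
```

Any downstream model trained on the synthetic set inherits those distortions, which is exactly why “democratizing AI” with synthetic data still depends on the quality of the generator.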
The market shortage of #AI talent doesn’t just mean that it’s difficult to build the smart applications you want to build. It also means that there’s no way for you to know whether the data scientists you already have are doing their job correctly/efficiently/scrupulously or not.
Reading a Gartner report about “shadow #AI”, a term used to describe the scenario where users bring their own data to build their own AI models. Which seems to suggest nefariously biased algorithms. But “sanctioned AI” isn’t any less biased. Ask startups where they get their datasets.