Virtual Reality (VR) has yet to take off, in part because the technology is still too far from giving us the feeling of reality, and in part because there’s no killer app.
I would love to play a hyper-realistic VR version of Assassin’s Creed, and maybe gaming will be the only killer app that the VR industry will ever need to sustain itself and grow exponentially.
But, somehow, I believe there will be an opportunity for at least one other killer app when the technology is mature enough. Something that might appeal to a much broader audience than the gaming community.
“How does it feel to live like a rich person?”
At least once in our lifetime, many of us have wondered that. Either on the way to the top or resigned to a life that will never reach that altitude.
“How does it feel to live like a pop star?”
“What happens in the life of a stockbroker?”
I am not talking about an approximate simulator like a VR edition of The Sims. I am talking about a hyper-realistic recording of the life of an actual pop star or stockbroker, inclusive of all the people and experiences this real-world person might encounter in 24 hours.
I’m also talking about an artificial intelligence (AI) that manipulates this hyper-realistic recording and adapts to the choices made by the user, maintaining consistency with the world of the selected lifestyle.
In other words, if the user is living the life of a pop star and decides to abandon the stage to instead go to the grocery store, the AI unfolds the events in a way that matches what would probably happen to Justin Bieber if he did the same.
Once technology allows a system like this, the opportunities to live a completely different life are endless. That’s why we watch movies and TV series. It’s not just for the narrative, or Hollywood would have gone bankrupt long ago. It’s to live a completely different life for two hours, without all the risks and fears and hard work that come with an actual life change.
“What does a Navy SEAL see in 24 hours?”
“What happens in the life of a doctor who joins Médecins Sans Frontières?”
That said, I don’t think that living the life of a pop star or a Navy SEAL for a day is enough to make what I’m describing a killer app.
To reach that level of traction, I believe we have to tap into the hidden curiosity and dark secrets that every human being harbours.
“How does it feel to be a drug dealer?”
“What’s the life of a prostitute?”
It can get much worse than this, but you get the point. The more despicable the experience, the more morbid the curiosity, and the higher the price.
There are plenty of reasons against building a VR app that can reproduce the full spectrum of human experiences, including the most sordid ones. There are some reasons in favour of building it, too, but it’s hard to imagine somebody making a case for any other reason than pure commercial interest.
More than that, the ethical and moral guardrails that exist in 2022 would make it impossible to reach commercial success with an app that reaches those limits. But, maybe, those guardrails would be different a century from now. What is morally and socially acceptable today has changed dramatically since the Victorian age.
I’d call it “life tourism”.
As I wrote this, I remembered a movie I hadn’t seen in a long time: Total Recall. It is based on a short story by the legendary science fiction writer Philip K. Dick: “We Can Remember It for You Wholesale”.
Memory implants are significantly beyond whatever marvel we’ll be able to achieve with a hyper-realistic VR. But I suppose that some aspects of that story match what I’ve described here.
By “Frictionless IT”, Red Hat means an enterprise IT that just works, reshaped after the experience offered by modern consumer-grade public cloud services, which business users increasingly expect.
What does Frictionless IT have to do with Red Hat and the IT organisations that we serve? Simple: if we don’t start moving towards Frictionless IT, we all risk irrelevance.
Current generations of IT professionals are experiencing a growing disconnect between Enterprise IT and Personal IT.
Enterprise IT remains reliable, but in most cases slow to procure, complex to use, and overall frustrating. Think about your expense report system.
Personal IT is evolving into a set of services that are instantly available, incredibly easy to understand, and blazingly fast at executing the tasks they are supposed to execute. Think about Gmail, Dropbox, Evernote, IFTTT, and the plethora of other public cloud services that we all interact with on a daily basis through our phones, tablets, and laptops.
The first problem with this split brain between Personal and Enterprise IT is that our brain is exactly the same, inside and outside the office. Any interaction with this emerging Personal IT raises the bar on how the IT experience should be. The more we use Gmail, Dropbox, Evernote and IFTTT in our personal life, the more our expectations grow for a similar experience at work. We wonder more and more, “if my Personal IT is such a breeze to use, why does my Enterprise IT have to be miserable?”
The second problem is that current generations can endure frustrating Enterprise IT only because that’s all that they have experienced for decades. New generations will not be so forgiving. The kids in college today, and those who just started their first job in a new, exciting startup, are growing used to only one kind of IT experience: the frictionless one.
At some point in the near future, these kids will land more reliable and less stressful jobs in large enterprises. It will not be just one or two individuals with a different set of expectations joining a typical bank or insurance company. It will be a whole generation that permeates every department of an end user organisation, from marketing to engineering, with a completely different set of demands and expectations. The overwhelming majority of IT organisations, and the traditional solution providers that support them, are completely unprepared to meet that demand.
At Red Hat, we recognise this challenge. In it we see an opportunity to simplify enterprise software in many dimensions, from the user interface to the underlying architecture, through not only the technology, but also aspects like documentation, licensing and much more.
We believe that at least three ingredients are necessary to meet the demand for frictionless IT:
- Ease of use
- Speed
- Seamless integration
Ease of Use
A key enabler for a Frictionless IT is a smooth user experience (UX). The user experience is defined by the quality of an interaction between the human and the system, and it takes place when you deploy, integrate, customize and use enterprise systems. Intelligent installers and self-contained binaries, simplified back-end architectures, supported out-of-the-box plug-ins, modular front-ends, consistent UIs and even coherent documentation all contribute to improve the quality of the UX. However, very few organisations in the world look at these aspects from a holistic standpoint and take a user-centric approach. For example, the user interface (UI), in both commercial-off-the-shelf and custom-made applications, is one of the most overlooked aspects of enterprise software.
If you think that investing in state-of-the-art UI is unnecessary, or not worth the effort, think again. The primary reason why some public cloud offerings become overnight successes at a planetary scale is their intuitive UI. In our Personal IT we are already getting used to intuitiveness, and the demand for it is supported by the broad market offering. We have already reached the point that when an app on our smartphones is too complex to use in the first few minutes, we simply delete it and download an alternative. There’s no second chance for the app that is not frictionless.
Now let’s go back to the upcoming generation of technology consumers. Even among the most technical of them, some may have never built a computer by screwing a motherboard to the case (as many of us did), used a command prompt or plugged in a network cable. Those users will expect that installing software will be as frictionless as deploying a virtual appliance, plugging a cable will be as frictionless as drawing a line on a service catalog UI, and so on.
If the IT organisations of tomorrow don’t deliver that kind of ease of use, future generations of business users will simply circumvent them, more than today, relying on external cloud service providers. And to meet the expectations of future generations, the UX in enterprise software has to dramatically improve.
Red Hat understands the challenge, and we are working hard to influence the open source projects that we support in the short and long term. For example, our commercial cloud management platform, CloudForms, comes as a single virtual appliance; this is in contrast to other cloud management platforms that may require 6 to 9 different tools (and not all of them available as virtual appliances). We consider this a prime example of the effort we put in engineering more frictionless enterprise solutions.
A second key enabler for Frictionless IT is speed. If the interface is pretty but you still need 20 steps (or 20 weeks) to get the job done, it’s not frictionless. We already know that speed deeply influences the UX, to the point of impacting search engine rankings, thanks to the enormous research conducted on aspects like loading time in web development. And yet, it took the industry a long time to realize that the same human brain that doesn’t tolerate a very slow page load very likely won’t tolerate a very slow enterprise IT experience.
Speed has become an increasingly important factor in the last five years, to the point that the industry constantly mentions agility as the most desired attribute for business and development models. Of course agility is not just speed, but speed is a very big part of it. Which is one of the many reasons why, for example, we are seeing a shift of interest from virtual machines (VMs) to application containers.
Operating system and application virtualization are as old as (and in some cases, older than) hardware virtualization. More than ten years ago, the emerging virtualization industry was rich with technology startups focused on all three approaches. As we know, eventually the mainstream audience preferred VMs over what we used to call operating system partitions and application layers, but today we are experiencing a second coming of the latter technologies because customers’ business needs are changing and evolving, as they always do.
Ten years ago, IT organizations’ primary challenge was modernizing the data center while maximizing the ROI on existing hardware equipment, and hardware virtualization brilliantly helped to accomplish the goal. Today, IT organizations’ primary challenge is addressing the business demand as fast as possible, because there’s now a competitor that never existed before: the public cloud provider. Application containers can be deployed in seconds rather than the minutes needed for VMs, significantly shrinking the reaction time for a variety of scenarios, including scaling out a web application to address an unexpected traffic peak and avoiding a fatally slow loading time.
Application containers are just one example (and to be fair, they have more virtues than just speed of deployment); we constantly look at solutions that can dramatically increase operational speed.
A third enabler for Frictionless IT is seamless integration between enterprise products and the ancillary services necessary to make them work or unlock their full potential. No successful software or hardware comes without a certain degree of integration with the existing enterprise IT environment, but the extent of that integration makes or breaks the UX, in turn impacting users’ productivity.
Integration can happen at the back-end level and at the front-end level. The latter is rarely considered, so I’ll focus on that in this post. To clarify the deeply underestimated importance of front-end integration, I always use the analogy of the smart calendar.
In preparation for a business meeting, most of us check a couple of apps on our smartphones: the calendar app, to know when, where, and how we need to meet; and the map app, to know how to get there. In a perfect world, especially if the business meeting is a delicate negotiation with parties we’re meeting for the first time, we might want to check at least another couple of apps: LinkedIn, to learn more about the people we are going to meet; and Twitter, to learn what those people have to say about topics that may be relevant to the negotiation. Of the four, it is the last two apps that could provide the intelligence necessary to successfully close the negotiation. But because the information is spread across so many different apps, which dramatically increases the friction, we limit ourselves to checking the first two, the indispensable ones. Crucially, because of the friction, we skip the information that could be most valuable for the meeting, which deeply impacts our effectiveness.
Thankfully, there’s now a better way. A wave of so-called smart calendar apps is emerging (and rapidly being acquired), with their biggest value being the ability to blend the front ends of the aforementioned four apps into a single, consistent UI that dramatically reduces friction. If you have ever tried smart calendars like Tempo or Sunrise, you have an idea.
Enterprise IT has to follow the same path: improve integration to minimize the friction (which in this case can appear as a steep learning curve) and maximise the productivity of the enterprise audience.
Ease of use, speed, and integration are key ingredients to dramatically improve the enterprise software (and hardware) UX. But what’s the difference from the past, you might ask. User experience has been considered a key differentiator since the late 60s by companies like IBM. And there are plenty of ROI calculators showing that UX has a quantifiable impact on business. The difference is that now enterprise users have choice, and enterprise IT organizations have competitors. And the choice is incredibly broad and incredibly accessible. If IT organisations fail to deliver Frictionless IT, lines of business (LoBs) will simply go elsewhere and get the job done with whichever of the many available tools is most convenient (in terms of simplicity, not cost).
A LoB doesn’t care about security, compliance, and integration issues, nor does it trouble itself with the politics driving the IT organisation’s choice of one solution over another. A LoB only wants to get the job done within the deadline. And if corporate policies get in the way, they will often be circumvented. In turn, if corporate policies get circumvented and the tools that empower a LoB are provided by external cloud service providers, in the long term the role of the IT organisation will become less relevant. To stay relevant in the eyes of upcoming generations, both vendors and their clients must recognise the ongoing transformation, anticipate the upcoming demand, and adapt.
It’s great to see how some vendors are starting to realise the need for Frictionless IT. For example, during last week’s Red Hat Summit 2015, our long-term partner SAP demonstrated a growing awareness of the need for simplicity.
On our side, we are working to deliver the most frictionless products that the open source communities, supported by Red Hat’s expertise and vision, can offer. We have a long way to go, but we are confident that this is the right path to walk. Stay tuned for more on this front.
When Apple announced the new Magic Keyboard for the iPad Pro, I decided to pre-order one and try it. Being a MacBook Pro 15” user, I tend to avoid travelling with the corporate laptop due to the size and weight. My iPad Pro 12.9” is fantastic (the best iPad I ever had, in fact) for travelling but when it comes down to writing long pieces or taking notes during meetings, the digital keyboard is too slow. There are a lot of 3rd party keyboards on the market but I dislike all of them as they wrap the iPad like a hardcover, preventing a quick removal, and, quite frankly, they look ugly to me.
The Magic Keyboard doesn’t enclose the iPad, allowing a quick attach/detach operation (in certain conditions) and not hiding the hardware design. Good enough reasons for me to try the product.
After mentioning on social media that I purchased the keyboard, I have received a few requests for a full review, so here we are.
By now, the tech press has published dozens of reviews about this new keyboard. If you haven’t read any, here are some good ones: DaringFireball, MacStories, and TechCrunch. Most of them are written by great journalists/bloggers who travel and write a lot. For the most part, I won’t repeat the things they already said about the keyboard.
At this point, I have used the keyboard for one week, which is enough to share some first impressions, but I’ll edit this review if and when new aspects worth mentioning arise:
- It’s a wonderful stand and a terrible cover
- It’s not as heavy as a rock
- Typing is superb but noisy
- The trackpad is fast, smooth, accurate, and really noisy
- App support is inconsistent
- It can drain the battery quickly
- Yet, it’s a must-buy
It’s a wonderful stand and a terrible cover
When I first saw the pictures of the Magic Keyboard for iPad Pro online, I immediately thought that it was designed to be a permanent stand more than a portable cover. Something you would deploy, for example, on your shop counter to use the iPad as a cash register (a bit like Square does).
I can confirm that feeling after in-person use. In its fully open position, the keyboard is the sturdiest stand I ever used. You can move the iPad and keyboard around, just holding the latter, without any fear of detachment. You can place the duo on uneven or soft surfaces like a sofa and still have an exceptionally stable mobile computing station.
Even if you don’t use the keyboard to type, it is the best stand I have tried so far for using the iPad as a secondary monitor (via Apple Sidecar if you have macOS Catalina, Duet, or something else). The reason is that its unique design elevates the iPad to the same height as my MBP 15” screen, making it easier to keep my eyes focused on the top part of the screen as I usually do.
However, if the keyboard is closed around the iPad like a cover, and you want to barely open it just to extract the iPad, you are in for 10-40 seconds (depending on your level of clumsiness) of absolute frustration. The magnets of the keyboard are so powerful that, first, you have to open it with two hands and, second, you have to apply brute force to attempt to remove the iPad from the opening angle. An operation that would take a millisecond with an Apple Smart Cover is close to impossible with the Magic Keyboard. To the point that the fastest way to remove the iPad is to fully open the keyboard in its “stand position”, as I call it, and only then take the device out.
To make this even more explicit, here’s an example. Let’s say that you are at the airport waiting for boarding and you have your iPad with the Magic Keyboard under your arm. If you had the Smart Cover and wanted to check something very quickly, you’d simply flip open the Smart Cover and use the device. With the Magic Keyboard, this is impossible.
It makes me feel like it’s better to permanently leave the keyboard in its “stand position” where I normally work and attach/detach the iPad when I start/end my working activities.
It’s not as heavy as a rock
Regardless of the actual weight expressed in numbers, almost all the reviews I read gave me the impression that the keyboard (or, better, the combination of the keyboard and the iPad) was as heavy as a rock. The duo is heavy but not outrageously so, and still a lighter (and more flexible) option than carrying around my MBP 15” (I realize that it’s not an apples-to-apples comparison). Also, as you’ll read in other reviews, the Magic Keyboard plus the iPad combination is just 50g heavier than other combinations with 3rd party keyboards.
The bottom line is that when you close the keyboard around the iPad, and you carry the duo around, it doesn’t feel like an unbearable burden. And it certainly doesn’t look clunkier than some other 3rd party keyboards I have seen.
Typing is superb but noisy
Typing on this keyboard is a great experience. Significantly better than typing on my MBP and its terrible butterfly keyboard. While the keyboard is spacious, it took me 15 minutes to adjust to the slightly different spacing between the keys, but after that, typing was extremely fast, extremely reliable, and extremely satisfying.
I read many negative comments about the lack of an F-keys row. I never use them, so I didn’t miss their presence. I have similar feelings for the lack of an ESC key (there’s a workaround for it anyway).
That said, the keyboard is noisy. And I am not talking about the noise that a heavy typist makes while he/she channels anger or energy during a conference call where he/she is supposed to be on mute. This keyboard is noisy even if you type softly.
The noise is very satisfying but can be very distracting for somebody who is near you while you type. I can’t imagine what would happen in a face-to-face meeting with ten people all typing at the same table.
The trackpad is fast, smooth, accurate, and really noisy
The trackpad is significantly smaller than the gigantic surface offered by my MBP 15”. Yet, it is very fast (so fast that I had to reduce its default speed in the settings), absolutely accurate, and produces scrolling as smooth as a built-in trackpad’s. Not for a second do you feel that the iPad and the trackpad (or the keyboard) are not part of a single system.
All gestures you normally use to navigate the iPad work fine with the trackpad, and the response time is instantaneous. For some reason, the gesture-based navigation on the Magic Keyboard seems even more seamless and natural than on the MacBook.
I couldn’t find a way to reproduce the single finger sideswipe in certain apps (like in RSS newsreaders) through the trackpad, so for that task, I still touch the screen.
But boy, this trackpad is noisy. Way noisier than the keyboard itself. I can’t imagine anybody in a meeting room not hearing when you click on this trackpad. Maybe I got so used to fixed trackpads on the MacBook Pro that I forgot how noisy a regular trackpad is. Whatever the reason, it’s impossible not to notice the noise generated by this keyboard. I am concerned that the sound of both keyboard and trackpad would be highly distracting during a conference call (thankfully, a growing number of online meeting solutions are starting to adopt artificial intelligence to filter out unnecessary sounds like clicks and background noises. Krisp, Discord, Microsoft, Google are the first that come to mind).
You can mitigate the problem by enabling both “Tap to click” and “Two-finger Secondary Click” in Settings, under General > Trackpad. That’s the way I set up my MBP, but for some reason, it’s less natural to do the same on the Magic Keyboard.
App support is inconsistent
When you use the trackpad to navigate the iPadOS interface, moving across icons or UI elements in native Apple apps, everything works fine. The pointer changes shape depending on the UI element it lands on, and the whole experience is delightful.
However, try to use the Google Docs for iOS app, as I am doing to write this review, and the experience becomes way more frustrating. The pointer doesn’t change shape any more; it’s not even visible until you double-click on the page to start writing; and, most importantly, selecting and copying (or hyperlinking) a portion of the text requires multiple attempts.
The solution is to delete the app and use the web version of Google Docs from inside Safari. It offers much better pointer support, the double and triple click correctly selects words and entire sentences, and everything is high fidelity just like in macOS.
In another situation, switching from Facebook Messenger for iOS to another app and back caused the keyboard to stop working inside Messenger. I had to kill Messenger and relaunch it to have the keyboard work again.
Hopefully, this is a temporary problem. Should this keyboard become as popular among iPad professional users as I expect it to be, Google and other players will have all incentives to improve their native app support quickly.
Unrelated to the pointer behaviour but pertaining to 3rd party app support in general, the Magic Keyboard design forces the iPad into landscape mode. While this is fine in most situations, in some rare circumstances you might find yourself using iPhone-only apps (like Instagram). Those apps, while scaling up as usual to leverage the iPad’s screen real estate, will not rotate away from their portrait orientation.
It can drain the battery quickly
The keyboard seems to drain the iPad battery very fast. I will leave the scientific tests to people way more competent than me. What I know is that writing this article took me 2.5 hours, during which my battery went from 60% to 35%.
Maybe a 3rd party keyboard would have drained the battery as much, but for sure the iPad digital keyboard wouldn’t have. It’s also possible that the problem is related to the native Google Docs app for iOS I used, rather than the web version through Safari.
Speaking of battery consumption, using the iPad with the Magic Keyboard as a secondary monitor via Sidecar for 8 hours drains a fully charged battery only to 50%.
Yet, it’s a must-buy
The iPad has been around for a decade. In these ten years, I haven’t seen or tried a single keyboard that doesn’t look and feel like a compromise in terms of design and experience. While the Magic Keyboard for iPad Pro is not perfect (it’s the first iteration, after all), I consider it a must-have purchase if you plan to use the iPad for work. I particularly recommend it if you travel a lot and want the freedom and flexibility to switch from reading/watching to heavy-duty writing.
I’m a former industry analyst. In that role, I worked with hundreds of startups and dozens of large, established vendors, and I can honestly say that many of them don’t get analyst relations (AR) right. Hopefully, this document will serve as a guide to rethink the value of analysts and how the AR programs are developed.
AR can go wrong for two main reasons:
In most cases, vendors think about industry analysts as market influencers, strongly opinionated and equipped with a big megaphone, capable of reaching, without particular merit other than the established brand they work for, those organizations that could become customers.
Nothing could be further from the truth.
The aforementioned perception comes from a strong cognitive bias rather than a true understanding of what analysts do*. In fact, the majority of vendors I worked with completely ignore how industry analysts develop their strong opinions, and what true value they have to offer.
The reality is that analysts develop their opinions after hearing tens/hundreds of organizations per year (it depends on how big the firm is) at an individual level, and hundreds/thousands of organizations per year collectively, discussing their interactions within the analyst community (if you think that your threads in the corporate mailing list are long, think again).
Here lies the true value of an industry analyst: being a collector of the overall market sentiment towards technology, a product, or a vendor. His/her position is moulded by what customers keep reporting in analyst enquiries**.
By listening to what an analyst has to say, and asking the right questions about his/her audience, a vendor can get invaluable feedback about the portion of the market it cannot yet reach (and sometimes also the one already acquired). That feedback is precious and second only to the feedback that the sales organization collects in the field.
Vendors that don’t get analyst relations right usually fall into one of the following three camps, sometimes shifting from one camp to another as their exposure to the analyst community increases:
Ignoring the pundits
This is the category of vendors that perceive analysts in the worst possible way and tend to have zero or minimal AR. In my career, I have heard analysts referred to as pundits who spend all their time theorizing from their ivory tower, completely out of touch with what the market really wants.
Part of this belief comes from the fact that industry analysts are mostly known for their predictions. The data points they collect in their interactions with a multitude of organizations worldwide, and the patterns that they identify in the ocean of data they collect, are often turned into trend analysis and the subsequent generation of a series of predictions.
However, like any professional in any industry, analysts make mistakes and can sometimes go horribly wrong.
Given the impression that predicting trends is their main job, there’s an expectation that analysts don’t make mistakes in their predictions. And every time a prediction is wrong, faith in the analyst community gets seriously tested.
Vendors in this category fail to realise that there’s a much more reliable part of the analyst job, the data gathering and interpretation, from which the most value can be extracted.
This state of mind usually lasts until the first few prospects counter the sales or marketing statements with some research published by one or more analysis firms. At that point, this category of vendors has no choice but to move to the next camp.
Dealing with the necessary evil
This is the category of vendors that believe analysts have nothing particularly valuable to offer but, given their enormous influencing power, must be dealt with.
Part of this belief comes from the assumption that the vendor itself talks to organizations as much as the analysts do, and so any feedback worth capturing will be captured. Of course, this is not the case.
First of all, organizations that talk to analysts are infinitely more candid about technologies, products and vendors than they would ever be with the vendor itself. Not only does the nature of the relationship invite a straightforward conversation; that relationship is also protected by a confidentiality agreement. There’s no confidentiality agreement between vendors and their customers or prospects.
Second, industry analysts almost always talk to many more organizations than the vendor possibly could. Individually or collectively, the analyst community gets pinged by or proactively reaches out to organizations worldwide, regardless of industry, geography, corporate size, function, persona, etc., building a far more comprehensive and structured view of the world than any vendor out there ever could. Even the biggest vendors, with sizable sales organizations, which in theory could collect many more data points, in practice cannot rival a professional analysis firm, as the former is not trained and organized to collect and interpret data like the latter.
The vendors that fall into this category perceive AR as a waste of time that could be dedicated to something else, like marketing activities, without realising that analysts can be the most valuable feedback mechanism available for understanding why some companies are not their customers.
Sometimes, champions of the analyst community within the vendors in this category can help them shift to the next camp.
Confirming a bias
This is the category of vendors that actually believe analysts have a value, just the wrong one: supporting with quotes or written research whichever position is the most convenient to the vendor at any given time. In cognitive psychology, this is called confirmation bias.
The vendors that fall into this category will actively seek (or even request to produce) any bit of information that can be opportunistically placed in a presentation, marketing brochure or press announcement. Anything else, especially adversarial evidence, will be systematically ignored.
The analysts will be selectively trusted or distrusted according to how aligned their position is with the vendor’s strategy. If one analysis firm is not confirming the bias, the vendor will seek an alternative opinion from competing firms until the bias is fully validated.
The implications of this approach can be devastating. First, it’s impossible to find the truth: no matter what the vendor’s position is, given enough time, some evidence to support it will come up. Second, the continual reinforcement of a bias progressively pushes vendors out of touch with what the market is really asking for (i.e., drinking the proverbial Kool-Aid).
Regardless of what camp a vendor belongs to, there’s a series of common mistakes that most vendors make and that should be avoided as bad AR practice:
- Pitch the analysts the same way customers are pitched
- Pitch the analysts with too many / the wrong representatives
- Not understanding how to get analyst momentum
- Brief and forget
- Not studying the analysts
- Not having a mechanism to process and incorporate feedback
- Not understanding the goal of the analyst
Pitch the analysts the same way customers are pitched
The most typical mistake a vendor makes with analysts is pitching them in exactly the same way end-user organizations are pitched. End users and analysts are completely different audiences that look for completely different sets of information in their interaction with the vendor.
End-users’ top priority is understanding:
- if a vendor can solve their problem,
- how reliable a vendor is as a business partner,
- what products and features the vendor offers to solve that problem,
- what’s the effort required to implement them,
- how much they cost.
Analysts’ top priority is understanding:
- how the vendor differentiates itself from its competitors in solving a customer problem,
- how many (and what kind of) customers are already using its solutions in production,
- how healthy the business is,
- how much the ecosystem around the products is growing,
- what’s the vision behind all of that,
- what the strategy to execute that vision is.
As you can see, the two audiences have dramatically different priorities, and yet too many vendors start their briefing by explaining to the analysts the problem that they are trying to solve.
Unfortunately, the analysts almost always know the problem much better than the vendor, for no other reason than that the analysts talk to a great many customers that have that exact problem. And customers that have a problem are acutely aware of it.
Moreover, and more amusingly, all vendors that compete to solve a certain problem describe that problem in exactly the same way, because unless they got it completely wrong, there’s not much creativity you can (or should) put into describing the problem. So a vendor that dedicates half of the analyst deck to describing the problem has unknowingly produced a presentation identical to those of all its competitors, to the point that nobody would notice a difference if the logos were swapped between decks.
Of course, analysts want to know about the first five bullet points, but if they did their homework they already have that information, at least at a high level, so the interaction with the vendor should dwell on it for the shortest amount of time possible.
Pitch the analysts with too many / the wrong representatives
Another common mistake, affecting only large vendors, is allowing too many company representatives to speak to analysts. The more a vendor grows, the harder it gets to keep every employee on the same page and be sure that the company message is conveyed clearly and consistently.
Nothing compromises the analyst’s trust in a vendor more than lack of coordination and cohesion across the many departments of the organization.
Sometimes there’s no fragmented vision of the world, but some employees can’t resist putting a personal spin on that vision, impacting the analyst’s perception in a negative way. Other times, the fragmented vision of the world is an actual issue, and it becomes more obvious as more employees talk to the analysts.
Not only is it critical that a vendor keeps the message coherent in all interactions with the analyst community, it’s also critical that those interactions are optimized to put the analysts in front of whoever has the broadest understanding of the vendor’s strategy and is best at communicating it.
Not understanding how to get analyst momentum
Analyst momentum comes from customer adoption, not marketing effort.
After analysts have acquired the information detailed in the previous section, their second-highest priority is talking to customers. That activity is vital to independently verify the marketing claims and assess first-hand how the technology actually performs in production, what the challenges were to make it work, whether there was any change of mind after the initial commitment, and any other precious data points that a vendor wouldn’t normally share.
Some vendors don’t put enough effort into providing customer references (which, in all fairness, can be exceptionally hard to obtain), or enough details about each customer reference. Unfortunately, a customer logo on a slide is not particularly impressive to the eyes of an analyst, because he/she sees a similar slide from pretty much every other competitor, sometimes the very same day. The so-called Nascar slide is more about feeding the ego of the vendor than helping the analysts do their job. Analysts have to go deeper and understand if a vendor’s technology has been implemented in production rather than being just a proof of concept, how extensive the implementation is (i.e., corporate-wide vs. departmental), and so on.
Other vendors mistakenly assume that appearing in a research paper is an indicator of momentum within the analyst community. That’s not the case. An analysis firm featuring a vendor in a research paper may have many reasons to do so, including being comprehensive for the sake of accuracy. Being mentioned in a paper doesn’t equal being endorsed, unless it’s explicitly so.
Hence, the marketing effort a vendor dedicates to getting noticed is not even remotely as valuable as the effort to constantly provide new and detailed customer reference stories to the analyst community.
Not studying the analysts
Another typical vendor mistake in AR is coming to the meeting with the analyst completely unprepared. As any seasoned sales professional will confirm, knowing your interlocutor is critical and helps you get the most out of the interaction.
Preparation is not just about having a full understanding of an analyst’s coverage area and what his/her recent research is all about. Preparation is also about reading those research papers, watching the videos of his/her presentations on stage (if there are any), and reviewing his/her social media footprint (if there is any). All of this is beneficial to understanding where the analyst’s mind really is and how much his/her thinking aligns with the vendor’s perspective.
In fact, not every analyst is straightforward in communicating impressions about the vendor, especially during the briefing itself. In that context, multiple social dynamics are at play, and it can be hard for an analyst to express exactly how he/she feels about a vendor or its products. The assumption that an analyst likes the vendor just because he/she has been very friendly and polite during the meeting is flawed.
Additionally, during a meeting, an analyst may not have a fully formed opinion about what he/she is hearing and viewing, and may need more time to digest the information and come to a conclusion. That opinion further shapes up as the analyst talks to the vendor’s customers, competitors, competitors’ customers and other analysts, has hands-on sessions with the products, and so on.
The only way to fully understand what analysts think about a vendor and its offering is for the latter to ask straightforward questions, keep asking, and observe all the ways analysts express their opinions about the industry.
Given that the analyst community is quite big, this is a lot of work and possibly the hardest part of any AR program. The more a vendor invests in this, the more rewarding the AR program will be.
Brief and forget
Many vendors assume that after the analyst has been briefed once, he/she will remember the vendor, and everything about it, forever. That’s not the case. A really good analyst who is in demand can easily be briefed by 200-300 vendors per year, sometimes by multiple competitors on the same day.
As much as a good analyst tries to stay on top of his/her game, it’s just too much information to retain. Sometimes vendor presentations, especially the non-memorable ones, get archived and forgotten forever. Also, vendors can change their strategy often if they are struggling to gain market traction or must react to disruptors entering their space, and those frequent changes are not always captured by the analyst community without proper guidance.
All of this means that vendors need to work really hard to help the analyst community stay up to date with what’s happening on their side of the house.
In my career, I have heard vendors complain more than once that AR is too much effort and that analysts don’t do enough on their own. The reality is that an analysis firm is usually very, very resource-constrained, with infinitely fewer capabilities to keep up with the evolution of the entire IT industry than anyone could ever imagine.
Ultimately, it’s in the best interest of the vendor to be sure that the analyst community has all the information it needs to develop a well-informed opinion. Being front and centre in an analyst’s mind should be an imperative for any AR program.
Not having a mechanism to process and incorporate feedback
Those vendors that fall into one of the three camps described at the beginning of this guide have no genuine interest in the feedback that analysts can provide during an interaction.
Even when an analyst expresses valuable and constructive criticism, most of the time it doesn’t get registered, processed and incorporated in a structured way. It’s up to the single individuals in the meeting room, and their personal skills, to capture the feedback and leverage it in the most appropriate way across the complex business dynamics that regulate the vendor’s activities. This leads, in most cases, to no follow-up action, or even discussion, about the feedback.
The analyst feedback comes, once again, from his/her perception of the market as shaped by a multitude of conversations with end-user organizations. Given this, that feedback should be considered worthy of a debriefing and of a serious reconsideration of the choices the vendor has made up to that point.
I’m not suggesting that the analyst feedback should get the highest priority and be incorporated at all costs. A vendor hopefully has a clear vision of what it’s trying to accomplish, and not everything is shared with the analysts, so sometimes the latter may lack the context necessary to understand why certain recommendations won’t be implemented as suggested. Nonetheless, the analyst represents the customers, and as such the feedback he/she provides should be formally reviewed.
Not understanding the goal of the analyst
The ultimate goal of an analyst is not to be right or to call the vendor’s baby ugly. The ultimate goal of an analyst, or at least a good analyst, is to protect end-users’ interests, to stand for the customers.
Most analysts really want to make the world a better place, in their own way. If they perceive that vendors are not listening to what the market wants, they criticize whatever is necessary to criticize, until the optimal solution for the market needs is released.
In theory, vendors should want the same thing as, needless to say, offering the market what it wants translates into more business. Hence, rather than perceiving the analysts as opponents and working on them, vendors should consider analysts powerful allies and work with them towards success.
Appreciating all of this and educating employees about it is critical to the success of the AR program.
Some analysis firms have a page on their website dedicated to clarifying how their business model works, but that page is either insufficient, complicated to understand, buried deep in the website, or all of these things.
Analysis firms have vast sales organizations that, in theory, should spend time explaining in detail how analysts work. In practice, most of the sales force is too busy explaining the intricacies of the contracts to have time for a crash course.
The assumption that vendors educate themselves about analysts is fundamentally flawed and the analysis firms should reconsider their approach in this aspect.
An analyst, like any other individual on the planet, has pre-existing biases that will most likely influence his/her interpretation of what customers are saying. However, three things must be considered:
- Analysts get hired based on a number of key requirements. One of them is the capability to listen. Another is the capability to be less subject to strong cognitive biases.
- Even the strongest bias eventually gets softened or even radically changed by the continual exposure to data (as in all the hundreds of interactions with customers).
- Very strong biases unsupported by data are easy to spot for customers, who always seek the most professional and unbiased perspective possible. Those few analysts that show too much bias in any direction can easily stop being in demand.
After an entire year using the iPhone 6s and the release of iOS 10, I decided to switch to Android and buy a Xiaomi Mi Mix. A lot of people have asked for my impressions, so this is a report of my experience after two weeks of use.
- Thoughts on Mi Mix
- Thoughts on Android
I have been a long-standing iPhone user and have owned almost every model Apple released, except the original one, and maybe a couple of “s” versions. So, first of all, I think it’s worth explaining why I switched to Android.
There are four main reasons:
- The iPhone 6s is the worst iPhone I have ever owned.
The screen scratched incredibly easily compared to previous models, to the point that I have a hard time reading it when light reflects on it at certain angles. Also, the battery life is disappointing beyond reason, and much worse than the 6 model’s, to the point that the phone would shut down at random battery levels (for example, 12%, 21% and even 61%). Yes, as it turns out, I am one of those customers eligible for a battery replacement. Last but not least, iOS has become less and less stable with age, crashing and rebooting way more often than Apple users are accustomed to.
- With iOS 10, Apple changed the way notifications are displayed, removing the ability to group them per app and clear them in chunks. I literally depend on notifications to do a quick triage of the hundreds of messages and alerts that I receive on a daily basis, and the new approach forces me to either clear notifications one by one or all of them together. The former method is too time-consuming, the latter is not granular enough.
- I really can’t stand the protruding camera that first appeared on the iPhone 6. I understand that Apple doesn’t want to compromise between camera quality and the weight and volume of the device, but that bulge completely ruins the aesthetics of the current-generation iPhones, and it’s one of the reasons why I always disliked the early generations of Android phones.
- I really don’t appreciate the increasingly rounded shape of the iPhone 6/6s/7 generation. Not only do I not like it aesthetically, but it’s also increasingly slippery, to the point that I drop the device way too often. I waited before switching OS to see if Apple would change the shape with the iPhone 7, but they didn’t, and I was not inclined to put up with the issues described in the first two points for at least another year.
So, given that, in my mind, Apple is declining in terms of hardware and software quality, and the notification system has become disappointing, there was no reason anymore for me to not consider Android.
The fact that I switched is deeply concerning. Not because I want to stay with Apple at all costs. I have no bias; I just want the best possible smartphone the market has to offer. What really troubles me are the broader implications of my move.
I am a very vocal proponent of a frictionless user experience, both in consumer and enterprise IT. If Apple loses its edge on this, as I perceive it is doing at the moment, I don’t have confidence that anybody else will remain to set the bar and push the whole industry forward. If Android is so good today, good enough for me to jump ship, a significant part of the merit goes to Apple, which provided a model to emulate and surpass. If Apple loses its edge, I am very afraid we’ll lose the capability to innovate at the current speed and fall back into the dark ages of mobile computing that we experienced before the iPhone was launched.
Thoughts on Mi Mix
As I had decided to make the change, and given that I can’t stand the increasingly rounded shape of my iPhone 6s (and of so many Android clones on the market these days), I looked for devices with more squared form factors.
I originally wanted to try the Samsung Galaxy Note 7, even if I hate the Samsung logo on the front. Ironically, I couldn’t buy one because of its exploding battery. While waiting for a non-exploding version from Samsung, Xiaomi announced the Mi Mix, which has the ideal form factor for me, so I decided to treat this platform switch as a full-blown experiment and go for what the company called its “conceptual phone”.
First of all, I had to wait for some reviews to be published, to be reassured that the device works more than decently. The large majority of my work is done on the smartphone, on the road more often than not, so a reliable device, a large display and a long-lasting battery are mission-critical requirements.
I watched a great many video reviews, from the most authoritative sources to the least-known independent bloggers, and it was entertaining to see how all of them emphasised some aspects and downplayed others in a completely opposite fashion to what I will do below.
Secondly, I had to be sure I could actually get one. I live in London and Xiaomi doesn’t officially sell anywhere but in China. However, I read that GearBest established itself over the years as a reliable importer for Xiaomi phones in the UK, so I decided to pre-order the Mi Mix with 6GB RAM and 256GB ROM.
So how did it go so far?
Less slippery, more slippery
If you come from an iPhone 6s and you have small hands, like me, the phone feels huge. However, the squared edges make it way less slippery than the iPhone 6 with its rounded edges. The number of times I lost grip on my iPhone is countless, despite it being much smaller in size and weight.
On the other hand, in a very different way, the Mi Mix is more slippery than the iPhone. Its ceramic body is way smoother than the brushed metal of the iPhone, and I have seen it slide dangerously on multiple occasions because of this. The most epic one was when I simply placed the phone on a table at a restaurant. The table was so minimally uneven that you couldn’t tell with the naked eye. While I was talking, the phone silently slid down the barely inclined surface, all the way to the ground. Given that I hate covers (and the GearBest package didn’t include one anyway), I guess I’ll have to be mindful of where I place my phone from now on.
Tougher than expected
Reading and watching reviews, I had formed the impression that the phone was incredibly delicate and prone to cracking, mainly due to the display. It’s not the case, at least so far.
Xiaomi did an engineering miracle in packing a 6.4” screen into an almost bezel-less chassis, which has exactly the same size as an iPhone 6/6s/7 Plus with a cover (and even without the cover, the difference is minimal). It’s a marvel to look at, but it also means that there’s literally nothing protecting the glass against hits. I also read that Xiaomi didn’t use Gorilla Glass or other strengthening manufacturing processes, so I was especially concerned about breaking the screen at the first fall. Nonetheless, when my phone decided to slide off the table and fall flat on its screen, it didn’t get a single scratch.
The same can be said for the ceramic body. It hasn’t scratched in any way so far, and, unlike the fabled iPhone 7 Jet Black, the shiny blackness of the Mi Mix maintains its integrity as on the day I unboxed it.
Great but heavy battery
Coming from the terrible iPhone 6s experience, this is literally another planet. Even after disabling all the MIUI battery optimization settings, and despite the intense use of my first few days of exploration, the battery lasts well beyond a full day. On top of this, the battery also recharges really fast; I can’t quantify it exactly, but it definitely feels much faster than the iPhone.
Such an amazing battery (4400mAh) comes with a downside: the weight. The Mi Mix is really heavy, which I can deal with only because the battery is a top priority to me, but it certainly makes holding the device for long periods uncomfortable.
Stunning but challenging display
The display is both huge and beautiful, as extensively documented online. So I’ll focus on two aspects that reviewers didn’t mention at all: how the additional screen real estate is used, and how the bezel-less screen impacts usability.
First of all, in my opinion, the additional real estate is not fully leveraged by default. For example, the Xiaomi MIUI 8 customization doesn’t allow you to resize the icons so as to have more of them on the screen at the same time. It only makes everything look bigger. In my understanding, icon grid resizing is a feature available natively in Android starting with Nougat (Android 7.0), which Xiaomi is not pushing to its phones yet. However, third-party apps already allow you to change that aspect of the OS today, so I don’t see why MIUI 8 doesn’t offer the capability as well. While waiting for the availability of their flavour of Nougat, I opted to install a popular third-party app called Nova Launcher. Thanks to Nova, I could set up a grid of 5 columns by 7 rows of icons, which saves me from swiping through two screens or resorting to app folders on the first screen. Nova can do so much more than that, but I need none of those other capabilities. I would very much prefer to have the icon resizing feature native in the OS, to minimize the impact on battery life.
To further leverage the enormous display, I reduced the font size from normal to small for both the OS and as many installed apps as I could. In this way, I maintained the dimensions I was accustomed to on the iPhone 6s and had a lot more things on the screen.
Then there are the challenges that a bezel-less display poses.
When everything is just display, the amount of involuntary touching grows exponentially, and there’s very little the user can do to minimize it (there’s practically no free space to firm up the grip). I noticed that this is especially true with the top corners of the display (the right one in my case, as I am right-handed). MIUI 8 doesn’t seem to do anything in particular to avoid accidental touches at the edges of the display, but given that Xiaomi is pioneering a new form factor here and cautiously called the Mi Mix a conceptual phone, it’s OK if this aspect is not ironed out yet.
Ultimately, the screen is so big in a relatively small form factor that it completely eliminates, at least for me, the need for a tablet. A tablet is easier to hold, but a tablet is not a device that you always have with you, unlike a smartphone. On top of this, iPad apps never reached the level of usability that you experience with their desktop counterparts, so, despite being a lifelong believer in tablets, I saw less and less value in them over time, and increasingly reduced their size. I moved from the original iPad to the Mini, and now, with the Mi Mix, I don’t think I need a tablet anymore. However, I would consider a Mi Mix with a slightly smaller display, like 5.8” or 5.5”, if that resulted in a better balance between size, weight and screen real estate.
A shameful camera
Reviewers unanimously suggested that the Mi Mix camera is below the industry standard for high-end phones. That is an understatement. The camera is terrible.
Having a phone that can rival the iPhone in camera quality is not critical to me. But here we are talking about a camera that captures colours that are not there, at a very disappointing resolution.
Granted, in perfect light conditions, the Mi Mix produces decent photos, but in the real world there are never perfect light conditions. The picture above was taken in normal daylight conditions inside a house. If the camera cannot perform properly in such a typical situation, it’s a useless camera. And I took plenty of other pictures where the camera simply didn’t render the colours properly and the resolution was underwhelming.
It’s not a complete deal breaker for me, but coming from Apple I am hugely disappointed. I primarily take pictures of artworks at exhibitions in galleries and museums around the world, most of the time capturing pieces that I’ll never have a chance to see in person again for the rest of my life. So every shot is beyond precious. Knowing that I can no longer count on my smartphone for this, after years of great iPhone shots, is incredibly annoying. I can’t imagine having missed all the moments I captured over the years thanks to my smartphone, so, given that the Mi Mix produces photos of unacceptable quality, I already know that I’ll have to start carrying around an additional device just for that. I also know that, because of this single issue, I’ll replace the Mi Mix with something else (maybe a better version from Xiaomi) as soon as it comes out. The days of this device are numbered.
The camera is the one unforgivable flaw of an otherwise amazing phone. And it’s ironic because the extrusion I complain so much about on the iPhone is possibly what makes its camera great.
Metallic phone calls
As you probably heard, another engineering feat of the Mi Mix is the lack of a traditional speaker for audio calls. The whole upper part of the ceramic body vibrates during a phone call and that’s how you hear the sound.
The result is a sound that is very different from what we are accustomed to in modern phones. Voices are clear but a bit distant, and kind of metallic. You have the feeling that the sound is coming out of the back of the phone, where the camera and fingerprint reader are, rather than from the front where the display is. In fact, you can carry on a phone call by placing your ear on the back, rather than on the front, of the device.
It’s usable, just not ideal, especially in a noisy environment. It’s not a deal breaker for me because I rely on Bluetooth headphones. If I can keep them working, I am all set.
By the way, even though the whole ceramic body vibrates to emit sound, bystanders can’t hear the phone call even in close proximity, or at least no more than with any other smartphone on the market.
Thoughts on Android
In 2013 I bought a Nexus 7 tablet. At that time I had a very bad impression of the OS due to the slowness of the device, the lack of mainstream applications and the poor design. But Android has come a long way in just three years.
Before switching, I did extensive research on the Google Play Store to understand if there was an Android counterpart for all my iOS apps, or at least the ones on my first screen, which I use on a daily basis. I was very pleased to see that 23 out of 25 of my critical iOS apps were there, plus quite a few more from my other two screens. Without app parity, I would have never had the confidence to switch OS.
As soon as the phone arrived, equipped with Android 6.0 Marshmallow and Xiaomi MIUI 8 customization, I dedicated five full days to understand how the OS behaves compared to Apple’s counterpart, and how to change the things I didn’t like about it.
So how did it go so far?
Android has no specific merit in this, but thanks to the aforementioned app parity, the fact that nowadays almost every mobile app has a cloud back-end, and the web installation, I could migrate my entire application portfolio to Android in literally five minutes. I never preferred Apple technologies when I was on iOS, so I didn’t have any issue abandoning iMessage (which I used only with a couple of contacts), Apple Maps (which I ditched years ago for Citymapper and Google Maps), Safari (which I don’t mind replacing with Chrome given that it’s my browser of choice on the desktop), or even the iOS keyboard (which I replaced with the amazing Gboard as soon as it was released).
The only native iPhone app I really relied on is Apple Wallet, but I found Pass2U Wallet to be an adequate replacement.
So far I have noticed that the design and functionality of the apps are almost identical between iOS and Android. In some cases, the Android versions perform slightly better and/or have more features. In other cases, they perform slightly worse. Sometimes the same app has different bugs depending on the platform. At the time of writing, for example, Google Snapseed has a long-standing massive bug in its Healing feature on iOS that I am not experiencing on Android, and a significant bug in its Rotation feature on Android that I never experienced on iOS.
The few apps that I couldn’t find on the Google Play Store were replaced by alternatives that I spent time researching. Some of them are really good apps, well designed, with even more features than the iOS apps I had to leave behind. aCalendar replaced Informant, FeedMe replaced Newsify, etc.
Overall, I am really pleased with how the Android ecosystem has grown and matured compared to my first experience.
Frictionless web installation
More often than not, I discover new apps thanks to press articles, which I read throughout the day on my computer. When I find one that I want to try, I am naturally inclined to click on the link to learn more about it and maybe read some reviews before installing it. When you are an iOS user, you end up landing on the App Store web page which has a link to open iTunes to install the app (and then, through a wireless sync setting, you ultimately have it on your iPhone). All of this is an unnecessarily convoluted process which I always refused to go through.
First of all, the last time I used iTunes was probably five years ago. I find it useless and confusing, and I don’t see why I should still depend on a binary application as the central hub for my apps and media in the era of cloud computing. So, every single time, I am forced to search for the app I am interested in right on my iPhone and install it from there. Sometimes it’s easy, other times it’s not (for example, if the App Store search engine hasn’t indexed the new app yet).
Google Play Store allows me to install an app right from its web page. I click install, it asks me for authentication, I specify what device I want the app installed onto, and done.
Even better, if I start using the app, don’t like it, and delete it after a few moments, Android recognizes this and starts an automatic refund procedure without me doing a single thing. I just receive an email saying that I was reimbursed for whatever I spent.
This is a frictionless experience and I honestly don’t understand why Apple can’t do something as simple as this, given the billion apps that get installed every day worldwide.
Pragmatic wireless connectivity
There are other small things in Android that are making my life easier. One of them, which I always wanted in iOS, is the capability to choose the wireless network without digging deep into the settings. I am on the road incredibly often and I have to connect to new wireless networks more often than I’d like. With Android, I can do that right from the notification shade, which is just a gesture away. I don’t understand why Apple could create convoluted, secret handshakes to accomplish all sorts of gimmicks on iMessage for iOS 10 but couldn’t address such a simple thing.
Almost no phone manufacturer loads into its phones the vanilla (aka stock) version of Android. Most of them customize Android in significant ways to introduce innovative capabilities, leverage special hardware their phones feature, or track the usage of the devices. Even Google started to do so with its new iPhone clone, the Pixel. If you are familiar with Linux, you can consider each of these customizations as an Android distribution.
I don’t know how much these Android distributions can negatively impact the overall experience compared to stock Android. I would need to try a few different brands, for a very long period, to develop that kind of knowledge. Before switching, I read a lot about MIUI and how Xiaomi customers have a love-hate relationship with it. But I also read similar comments from customers using Android distributions from Huawei, LG, Samsung and other vendors.
What I know is that some things about MIUI are great for me, like the capability to have dual versions of the same app (useful when the app doesn’t natively support multiple accounts) or the capability to lock app access (which is something too few apps support natively). Others, however, are terrible, and are part of the reasons why I was forced to learn how to customize Android in all the ways I am doing.
For example, Xiaomi decided to assign an ugly background (white, black or blue) to every icon of every app you install. In my opinion, app icons look much better on Android than iOS thanks to the transparent background, but even if you have a different view, the Xiaomi implementation is really terrible. That may seem minor, but it really makes a difference for somebody like me, who cares about design and uses the phone constantly. I was forced to research how to customize the icons and install an icon pack. I never thought in my life I would have to do this sort of vanity trickery.
MIUI gets in the way in other, more profound ways as well. For example, by default, it enables very aggressive battery saving mechanisms that completely disrupt mission-critical processes like notification delivery. So I had to research why I didn’t receive all the notifications I was supposed to, then I needed to learn how MIUI manages power consumption, and then I had to figure out how to protect certain apps from being killed by the optimization settings, and how to change those settings. It’s complex and confusing stuff, with settings scattered throughout the whole operating system. I am a power user and I can figure it out, but the point is that I shouldn’t have to, and I certainly don’t want to. Users’ time is precious and the OS should help them, not make their life more difficult. I imagine that a non-power user would be completely lost and just walk away with the impression that Android (not the phone itself) doesn’t work properly.
Another example of MIUI getting in the way is how it prevents some apps from doing all the things they are supposed to do. For example, the Google app would allow voice unlocking as an extension of its OK Google capability. But somehow, MIUI prevents it from happening and I can’t figure out how to make it work. Honestly, it’s not even remotely as important as the disruption to notification delivery, but the point is that manufacturer modifications can significantly affect the overall Android experience and the experience provided by specific apps.
Last but not least, there is the Android Pay issue. In my understanding (and apologies if this is not 100% accurate – I am still learning), every Android distribution must be reviewed by Google before it can pass the SafetyNet test. The SafetyNet test guarantees that your phone is secure enough to store credit card information and operate as a payment device with Android Pay. So, for example, if the device has been hacked to install a custom distribution (aka ROM), the SafetyNet test will fail and the user won’t be able to set up Android Pay.
The SafetyNet test also fails if the Android distribution you are using is officially provided by the phone manufacturer but not yet profiled by Google as part of its Compatibility Test Suite. Google takes some time to review and approve new distributions, like my MIUI 8 for the Mi Mix. Which means that I now have to wait an unpredictable amount of time before I can use Android Pay in the London Tube as I used to do with Apple Pay and iOS. I also wonder what will happen if Google approves my current OS profile and later on I receive the Xiaomi update to Nougat. Will it break the Android Pay functionality?
Again, no customer should ever be forced to learn all of this, or be prevented from using a feature that is touted as part of the OS. It should just work, like Apple Pay just works on the iPhone.
It’s ironic that one of the four main reasons for me to leave iOS was the dissatisfaction with the notification system and I ended up in a notification nightmare with Android.
In Android, if an app generates more than one notification, the OS displays all of them as a group, giving the user completely useless information like: “you have received 4 new emails”. This is literally a deal breaker for me, and I am shocked that nobody sees the flaw in this approach.
The purpose and value of notifications is to give the user enough information to act upon it. Does it sound important? Let me read the whole message. Does it look like spam? I’ll ignore it and clean it later. Is it coming from an unknown sender? I’ll read it, if I have time, out of curiosity. And so on. Notifications are the one tool that allows us to decide whether to reallocate our attention or not.
By not displaying the name of the sender, the subject of the message (if any) and a preview of it, Android is preventing the user from assessing the situation and deciding what to do. The only information this approach provides is “hey, you have something new to check inside the app X”. This is valuable only to somebody who receives a very minimal amount of notifications per day. If the volume is really low, such as four notifications per day, the user can easily afford to tap into the notification to see what’s going on every time something new arrives, regardless of its nature.
Imagine that each notification is somebody poking you during the day. Four times per day, it’s manageable. But nowadays users don’t receive just four notifications per day. Even the people that don’t use their phone professionally, like me, receive dozens of notifications per day, from the most disparate apps. Which would mean that they would be poked dozens of times per day. Then, there are business people, who literally depend on their phones and receive hundreds of notifications per day. Hundreds of pokes per day, which must be processed to assess their priority and urgency.
In my case, at any given time, the Android native notification system is telling me that I received X amount of notifications from app Z. Opening each and every one of them to decide what to do is simply unsustainable. In iOS, each notification is displayed separately, and while Apple still doesn’t allow users to fully act on them at the lock screen (which is a shame), I can still have a very good idea of what’s going on and what to do with them. Yes, I might still decide to clear all notifications with a single tap, but at least I have a very clear idea of what I ignored.
I had to fix this issue. If not, I simply wouldn’t be able to use Android. So I was forced to start massive research on the topic as soon as I got my new phone. I discovered that there are hundreds of thousands of users who want to customize their notifications. Only a subset of them is vocal about having separated notifications, but I am not the only one; there is enough market to sustain a dozen or so apps fully dedicated to this business.
I installed and analyzed a few of them, focusing on the ones that explicitly mention separated notifications in their descriptions. Some of them are great in terms of customization but are quite unstable and battery intensive. Others don’t solve the problem exactly in the way I need. I ended up with Floatify, which is very customizable, feature-rich, and reliable (starting from version 11).
What I learned in this troubleshooting process is that all these apps require the modification of a long list of OS settings and permissions that users should not even remotely touch, or even have to know exist. Things like:
- Granting “drawing over other apps”, “display pop-up window” and “app usage access” permissions*
- Disabling one by one (absurd!) the heads-up notifications for installed apps
- Manually enabling autostart for the third-party notification app
- Locking the third-party notification app in the task manager to avoid it being killed by the battery optimization
In my understanding, the third-party developers are not the ones to blame. It’s Android that doesn’t offer a simpler and more centralized way to replace the native notification system. Maybe because developers were not supposed to? But then, given that these apps have existed for years, why didn’t Google close the current loopholes long ago?
The bottom line is that I left iOS in disappointment, and I ended up with the most important component of my workflow so incredibly unstable that I have to constantly check if it’s working. And it’s a shame, really, because Floatify (along with some of its competitors) offers me something that iOS never did: a notification preview with an arbitrary number of lines. This is a huge benefit because the bigger the preview, the easier it is to perform the triage that is critical when operating at scale. Moreover, Floatify allows me to act on notifications right from the lock screen in a way that is less awkward than on iOS (a tap or swipe on Android, rather than a 3D touch on iOS). I can instantaneously delete the unwanted emails and messages if I am inclined to do so.
Irony of ironies, even if I had Floatify (or any of its competitors) working perfectly, Android notifications would continue to be completely unreliable. If you are a technical person and have a lot of time, you are welcome to dive deep into the topic with this incredible read:
(note that the article mentions Android Marshmallow, but the situation apparently gets worse in Android Nougat)
Moreover, I still wouldn’t have the notifications displayed (not grouped!) per app, as iOS used to do up to version 10. I couldn’t find an app that both displays individual notifications and organizes them per app. Nonetheless, the multi-line preview is so valuable that I would stick to Android just because of it.
I understand that I am a power user, processing a volume of notifications per day that is an order of magnitude greater than what a mainstream user receives. But Android could equally appeal to both categories of consumers by allowing them to decide how the notification system should behave.
Despite everything I said in the previous section, Android notifications can be amazing. No, not the ones provided by default in Android Marshmallow or Nougat. Those are terrible. I am talking about the notifications that you can have with Floatify, assuming a significant dose of patience and abundant time to dedicate to customization.
Floatify can’t group notifications by app, in the way iOS used to do and as I hoped to have back, but it can unlock other capabilities that are critical for information triage: extended notification previews and contact pictures.
Compared to what I’ve accomplished, the iOS 10 notifications seem primitive:
This alone, for me, is a reason to consider remaining on Android permanently (or at least until Apple significantly revamps iOS).
Another sore point of Android (or maybe my Android distribution) that I noticed immediately is the instability of the Bluetooth module.
Another key part of my workflow is that my notifications get displayed on a Garmin Vivosmart wristband. Unlike most wristbands on the market, this one is small, relatively elegant, water-resistant, and has a tasteful OLED display that shows the full notification text, rather than merely informing the user that something new arrived. It’s so good that I keep buying it on eBay or Amazon every time I lose one, even now that Garmin has discontinued it.
The Vivosmart allows me to quickly assess the importance of incoming messages even if I am busy in a meeting or during a meal, without having to look at the phone all the time. Since I started using it, over one year ago, I completely stopped feeling like a “screen slave” (sometimes referred to as FOMO), and I wouldn’t go back to not using it for anything in the world.
Clearly, my system only works as long as the smartphone notifications provide the full text of the incoming messages. As I mentioned before, if Android only tells me that I got X messages from app Z, the wristband becomes completely useless and just buzzes at my wrist all day without providing any value whatsoever.
So, as soon as I got Floatify working, I connected the Vivosmart via Bluetooth exactly like I did with the iPhone, where it had always worked flawlessly. It didn’t go well, and I had to try to pair it multiple times. Even worse, the Bluetooth radio now gets randomly disconnected, without any apparent reason or warning, making the wristband I depend on so much completely unreliable. On top of that, I randomly lose my settings and have to select again all the apps I want a notification for. I appreciate the opportunity to be extremely granular in the settings, a philosophical approach that I saw in a number of aspects of Android, but why couldn’t Garmin just capture all notifications like it does on iOS?
An initial attempt to connect some of the many more Bluetooth devices that I still have to connect went even worse. While I had no issues connecting my Bose QuietComfort 35 headphones, when I tried to pair my Parrot Flower Power sensors the Bluetooth module crashed completely, to the point that I had to first clear the companion app cache (I had to learn what that was and how to do it; yet another operation that users should never have to do) and then completely uninstall it.
This is literally my next challenge: making all my Bluetooth devices work with my Android smartphone. It shouldn’t be a challenge at all. I shouldn’t be dedicating a single second to troubleshooting my phone.
iOS users are so accustomed to a significant level of reliability, for both the apps and the OS itself, that they take for granted that the whole system works all the time, flawlessly. Sure, some apps may have bugs, but there is an overall very high confidence in the quality of software in the Apple ecosystem.
This confidence doesn’t come from a leap of faith, where users blindly trusted the company and the iOS developers at large. It comes from years of exposure to high-quality software, so consistently reliable that it restored the trust in technology that desktop operating systems had compromised. Which is why, when an iOS app is very buggy, users get incredibly vocal and unforgiving. And that’s why, now that crashes and reboots are becoming more frequent in iOS, users like me are so disappointed that they can even consider a switch to Android.
You don’t get the same feeling of reliability and robustness in Android. It’s not just the fact that a specific third-party app doesn’t show notifications as it should all the time, or the fact that the Bluetooth module can crash. There’s more to it, and I started getting exposed to it very early in my exploration.
One little example. The default clock in MIUI 8 doesn’t have time zones. That is a critical feature to me as I travel worldwide for business. So I looked for a replacement. I decided to use a third-party clock from Google. It has a pleasing design, tons of great reviews, and the few features I need. Except, as I discovered the hard way, its alarms don’t function as they should. Twice, I set up the morning alarm and the app simply didn’t ring on time. When I checked what happened, the alarm simply appeared set for the day after the one it was supposed to ring on. Just in case I did something wrong, I tried to set up an alarm to ring in two minutes, which worked as expected, and then set up a new alarm for the day after. The alarm didn’t ring and I didn’t wake up, yet again.
That shook my initial, unbiased confidence that things on Android work reliably for the most part. I probably wouldn’t have been so affected by the bug if it had been in a system less critical than the alarm, and if it hadn’t been provided by the company that, more than anybody else, should care about Android’s success. But at the end of the day, the clock I tried remains a third-party app, so I guess I can forgive a less reliable experience than the one provided by the native clock.
But then it happened again, just one day later. This time during a phone call. The telephone app decided to suddenly crash, without terminating the ongoing call, while concurrently lowering the call volume to its very minimum. Restarting the app didn’t give me back control of the ongoing call, neither to terminate it nor to increase the volume. I found myself asking the person on the other side of the phone if he could kindly terminate the call for us so I could call him again.
And then, it happened again. This time with LastPass, my password manager of choice. LastPass uses a nifty trick on Android to help its users fill forms: it places a “fill helper” in the notification shade, which can be accessed at any time, even inside apps, with a simple gesture. It’s a great implementation because it allows filling forms even inside apps (something that the iOS counterpart cannot do at the time of writing). The fill helper dramatically reduces the friction of an otherwise convoluted process to retrieve securely stored passwords, and it’s, I’m convinced, what ultimately helps users keep using the app over the long term.
To play its trick, LastPass needs access to Android’s Accessibility Service. The app kindly guides the user through the Settings to make it so, and everything works as expected at the beginning. But then, randomly, LastPass gets disabled and the user loses the fill helper trick. Why? It turns out that, again, some Android distributions, like MIUI 8, feature quite aggressive battery optimisation techniques, and can disable access to the Accessibility Service for any app that uses it, without warning. Which Android distributions? There’s no way to know upfront. Users have to find out by themselves, in yet another lengthy troubleshooting session. And once they find out, they also discover that LastPass can’t do anything to solve the problem, as fully clarified on their online support page, captured below:
So, fundamentally, Android is a world where application features and services can unpredictably crash without warning and everybody is OK with it. It seems surreal to me. I would have understood if Android were newborn, but here we are talking about an eight-year-old operating system. How is it possible that such a situation is accepted as is?
The net result of these experiences, no matter if they are systemic or just a string of unfortunate circumstances, is the fundamental awareness that Android and its apps cannot be trusted, and that everything must be tested and double-checked multiple times. And, according to the many reviews I read for very many different apps, every update can compromise the reliability of even the things that are very stable.
That’s not how it should be. I shouldn’t be paranoid about a mobile platform that is so ubiquitous in my life. We always considered mobile OSs consumer technologies, but that is not an accurate characterization anymore. Our dependency on so many apps when we are on the road, in meetings, or with friends has turned mobile OSs into mission-critical platforms that must work as reliably as enterprise OSs.
Am I happy? Absolutely not.
As we start 2017, I feel like I am forced to make compromises that really shouldn’t be necessary.
The Mi Mix is great (except for the terrible camera), but the pairing of Android 6.0 and MIUI 8 is not. As I said, I don’t know which of the two is responsible for the issues I am experiencing, but I would very much appreciate it if Xiaomi offered the option of shipping the phone with a stock version of Android.
There are some things that I really like about the Android world over iOS, but the price to pay to have them is very high.
Android has evolved a lot in terms of usability and design quality over the years, but it’s still far from the stability of iOS. In my experience so far, if the time users dedicate to troubleshooting the OS cost money, an Android phone would cost an order of magnitude more than an iPhone.
For now, I’ll continue using the Mi Mix as an opportunity to learn and understand a world I have been far away from for a long time. If Android were more stable and less complex to get working, I would most certainly stick to it. So, going forward, my decision to stay will be mainly influenced by three events:
- What will happen once Xiaomi releases MIUI 9 based on Android 7.0 Nougat
- What Samsung will do with the Galaxy Note 8
- What Apple will do with the iPhone 8
I won’t enable comments on this post to avoid a sterile debate about which platform is better. Also, it doesn’t matter if I did something wrong, or if your mileage was way better than mine. The point is that the OS should make it impossible to experience any of the things I am experiencing, no matter what the root cause is.
If you want to engage in a conversation feel free to reach out to me over Twitter @giano
*Some of these permissions are hidden in some Android distributions. To find them, you need to install yet another third-party app called Activity Launcher and follow a highly technical, convoluted and extremely risky process.
Thanks to the effort of companies like Red Hat, Google, Netflix and many others, it’s safe to say that open source is no longer a mystery in today’s IT organizations. However, many struggle to understand the nuances that make a huge difference between vendors commercially supporting the same open source technologies.
Should the general public have any interest in understanding those nuances? A few years ago the answer would have been “no.” However, today, understanding those nuances is critical to select the right business partner when an IT organization wants to adopt open source.
As more vendors start offering commercial support for various projects, from Linux to OpenStack to Kubernetes, the need to understand the real difference between vendor A and vendor B becomes critical for CIOs and IT Directors.
At Red Hat, we have a TL;DR answer to the question “What makes you different from vendor XYZ?”. Our short answer is that we have more experience supporting open source projects, and that we participate in and nurture the open source communities in a way most other industry players simply don’t.
This is a true statement, but what does it actually mean? How does that translate into a competitive advantage that a CIO can appreciate when selecting the best business partner to support her/his agenda? Today, I’ll try to provide the long version of that answer, with some simplifications, in a way that is hopefully easy to understand for business-oriented people.
To narrate this story, let’s take as an example a fictitious open source project that we’ll call “Project-O”, and divide the story into three chapters:
- Chapter 1 – Innovation brings instability
- Chapter 2 – Instability is exponentially difficult to manage in large projects
- Chapter 3 – How vendors manage instability is their competitive advantage
Chapter 1: Innovation brings instability
At any given time during the lifecycle of Project-O, any individual in the world can contribute a piece of code to:
- introduce, complete or fix a feature (innovate)
- improve performance (optimize)
- increase reliability (stabilize)
To serve the business, we need to innovate and optimize. To protect the business, we need to stabilize. The continuous tension between these two needs drives hundreds or thousands of code contributions to Project-O at any given time. The bigger the project, and the larger the community supporting it, the more code is submitted at any given time.
Let’s use an analogy: if Project-O is an existing house, each code contribution is a renovation proposal. Imagine having hundreds or thousands of renovation proposals per day.
Just like renovation proposals, new code, especially code that introduces new features, can be written in a very conservative way or in a very disruptive way:
- It’s conservative code when adoption doesn’t break other parts of Project-O. In other words, the individual who wrote the code has been mindful of the “backwards compatibility.”
In our analogy, it’s when a renovation proposal doesn’t force the house owner to demolish existing walls or do some other major intervention to accommodate the proposed changes. Imagine painting a guest room.
- It’s disruptive code when adoption breaks other parts of Project-O and requires some major reworking.
In our analogy, it’s when a renovation proposal requires the house owner to make drastic changes to the plumbing system in the only bathroom. It can be done, of course, but it implies temporary instability and disruption inside the house.
Obviously, the more conservative the code, the fewer chances there are to innovate. And vice versa.
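For readers who want a concrete picture, here is a small hypothetical sketch (not taken from any real project) of what the difference looks like in code. Assume Project-O exposes a function `render(path)`; the function names and signatures below are invented purely for illustration:

```python
# Hypothetical Project-O API, illustrating conservative vs. disruptive contributions.

# The original function shipped in Project-O.
def render(path):
    return f"rendered {path}"

# Conservative contribution: a new optional parameter with a default value.
# Every existing call site, such as render("home"), keeps working unchanged,
# so the maintainer can accept it without breaking anything ("painting a guest room").
def render_conservative(path, theme="default"):
    return f"rendered {path} with theme {theme}"

# Disruptive contribution: the signature now requires a configuration dict.
# Every existing call site must be rewritten ("reworking the plumbing"),
# which is the cost the maintainer must weigh before accepting the change.
def render_disruptive(config):
    return f"rendered {config['path']} with theme {config['theme']}"

# Old-style calls still work against the conservative version...
assert render_conservative("home") == "rendered home with theme default"

# ...but break against the disruptive one until callers are updated.
try:
    render_disruptive("home")  # old-style call
except TypeError:
    pass  # indexing a string with 'path' raises TypeError
```

The conservative path preserves backwards compatibility at the price of a more constrained design; the disruptive path enables the bigger change but forces rework across the project, exactly the trade-off the maintainer has to arbitrate.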
When an individual wants to improve Project-O, he or she has to submit the proposed code to a group of individuals, called “maintainers”, who govern the project and have the mandate to review the quality and impact of the code before accepting it.
A maintainer has the right to reject the code for various reasons (we’ll explain this in full in Chapter 3), and needs to make a fundamentally binary choice: requesting strong backwards compatibility or allowing disruptive code.
In our analogy, the maintainer is the house owner that has to carefully evaluate the pros and cons for each renovation proposal before approving or rejecting it.
If the house owner wants an amazing new wing of the house, he has to be ready to tear down walls, rework the plumbing system, and deal with a fair amount of redesign. In similar fashion, the maintainer that wants to innovate and quickly evolve Project-O has to allow more disruptive code and deal with the implications of that disruption.
To address the business demand, especially in a highly competitive market like the one we have today, the maintainer has no choice but to allow disruptive code wherever possible*.
How a vendor deals with that disruption makes the whole difference, and can truly define its competitive advantage. This is where things get nuanced and interesting.
Chapter 2: Instability is exponentially difficult to manage in large projects
As we said, the larger the community behind an open source project, the larger the number of code contributions submitted at any given time. In other words, the amount of things you can renovate in a standard apartment is infinitely smaller than the number of things that you can renovate in a castle.
Let’s say that Project-O is a fairly complicated open source project, equivalent to a hotel in our analogy. For the maintainer of Project-O, the challenge is to consider and approve enough code contributions to keep the project innovative, but not so many that the community is overwhelmed by the amount of things to fix at the same time. Imagine renovating the rooms in one wing, rather than all of them at the same time.
When many functionalities of Project-O break simultaneously due to too many code contributions, the difficulty of fixing them all together in a reasonable amount of time grows exponentially. The problem is that the market cannot wait forever for Project-O to become stable and usable again. The innovation provided by the newly contributed code must be delivered within a reasonable amount of time to be competitive. Usually, large enterprises struggle to adopt a new version of Project-O if a stable release arrives more often than every 6 months. However, the same large enterprises won’t wait years for a new stable release of Project-O.
Again, it would be as if the hotel owner in our analogy approved 10,000 renovation proposals all executed at the same time, each one breaking existing parts of the hotel. Imagine upgrading the electrical, plumbing, and heating systems, and remodeling the restaurant, all at the same time. Fixing the resulting disruption would be so incredibly difficult as to render the hotel completely unusable for an excessive amount of time.
Given all this, the maintainer sets goals and deadlines to stop accepting code contributions. Once the deadline passes, no more code contributions are applied and the community works to stabilize the new version of Project-O enough to be usable.
However, “usable” doesn’t necessarily mean “tested” and “certified as reliable”. It’s the same difference as between “I tried to run the code a dozen times and every time it worked” and “I tried to run the code thousands of times, under the most disparate conditions, and I know that it will always work in the conditions I tested”. This is where competing vendors can make a business out of an open source technology that is fundamentally free for the entire world to access and use.
So, at a certain point, the maintainer freezes code contributions for Project-O. Subsequently, competing vendors look at all submitted code contributions and decide how much of it should be commercially supported** after their own extensive QA testing.
Because of this, the open source version of Project-O, called “upstream”, is not necessarily identical to the commercially supported version of Project-O provided by vendor A, which in turn is not necessarily identical to the version of Project-O provided by vendor B. There are small and big differences between these three versions, as they represent three discrete states of the same open source project.
Vendor A and vendor B need to make a decision on how much of Project-O they want to commercially support, trying to balance the need for innovation (addressed by newly disruptive code being accepted by the maintainer) and the exponential complexity of fixing the amount of instability caused by that innovation.
Chapter 3: How vendors manage instability is their competitive advantage
At this point, you may think that the differentiation between vendor A and vendor B is in how savvy or smart they are in “making the cut,” in how many new code contributions to Project-O they decide to support at any given time. In reality, that is only partially relevant. What really differentiates the two vendors is how they deal with the instability caused by the newly contributed code.
To manage this instability each vendor can leverage up to three resources:
- Deep knowledge
- Special tooling
- Strong credibility
When much of the newly contributed code is disruptive in nature, many things can break at the same time within Project-O. Sometimes the new code breaks dependencies in a domino effect that is very complicated to fully understand. Fixing all broken dependencies quickly and effectively requires a broad and deep knowledge of all aspects of Project-O. Like the hotel owner who intimately knows the property inside and out through many years of renovations, and has a very clear idea of all the areas, obvious and non-obvious, that the changes in a renovation plan imply.
This is why vendors involved in the open source world make a big deal of statistics like the number of contributions to any given project, like the ones captured by Stackalytics. Knowing how much and how broadly a vendor contributes to an open source project may seem a superficial and sometimes misleading metric, but it’s meant to measure how deep that vendor’s knowledge is. The deeper the knowledge, the more skilled the vendor is at managing the instability created by disruptive code.
No matter how deep the knowledge available, at the end of the day a vendor is an organization made of people, and people make mistakes. Human error is unavoidable. Hence, to mitigate that risk, some vendors develop special internal tooling that helps humans understand the impact of the instability created by newly contributed code, and make the necessary changes across the board to render Project-O as stable as possible, as quickly as possible.
Without deep knowledge of Project-O, it’s impossible to develop and maintain such special tooling. So, human capital is the biggest asset a vendor involved in open source has.
Through deep knowledge and/or special tooling, a vendor can identify and fix the broken dependencies in open source code faster than its competitors, but there’s one last challenge: submitting the patches to the maintainers and making sure that each part of Project-O is fixed in time for the release of the newly contributed code. If vendors get fixes accepted back “upstream,” they don’t have to maintain those fixes alone. But, for the fixes to be accepted, vendors have to prove their code helps Project-O, not just themselves.
Back to our analogy: the hotel owner accepted a certain number of renovation proposals to build a new wing, and compiled them into a renovation plan. The plan is ambitious, and the contractors executing it will break the current plumbing system in the process. Nonetheless, the plan must be completed within 3 weeks or the hotel will not remain competitive enough to justify the renovation plan itself. The contractor that is building the new wing breaks the plumbing system, as expected, and must ask for modifications from the contractor that owns that system. The owner of the plumbing system is willing to help, of course, but to comply he has to review the new wing project and the proposed changes to the plumbing system, and, if he agrees with them, order new pipes. The whole process would normally take 5 weeks, enough to compromise the whole renovation plan.
The only way to save the day is if the contractor building the new wing has strong credibility in plumbing. Credibility so strong that the requested modifications to the plumbing system are accepted without question, and the pipes are ordered with express delivery. In other words, the owner of the plumbing system trusts the wing builder so much that a further review is not necessary.
Such credibility is not granted lightly in the open source world. Few individuals are granted that sort of trust, and it is earned over years of continuous code contribution and demonstrated deep knowledge.
It is thanks to these amazing open source contributors deciding to join a vendor that the vendor is able to fix broken dependencies in a timely way. In fact, contrary to what might be assumed, highly trusted open source contributors are not easily hired and retained through standard HR practices. They independently decide to join and stay with a vendor primarily because they believe in that vendor’s mission and in how it conducts business.
So, in summary, the difference between two vendors operating in the open source world boils down to how capable they are at managing the instability caused by innovation. That differentiation is very subtle and hard for anybody to appreciate until it’s time to face the instability.
* The deeper you go into the computing stack, all the way down to the kernel of the operating system, the less disruptive code is allowed in, to avoid compromising the reliability of mission-critical systems and their capability to integrate with a well-established ecosystem of ISVs and IHVs. That’s why it’s much harder to innovate at the lowest levels of the stack.
** Commercially supporting open source software means that the vendor performs QA testing to verify code stability, provides technical support in case something doesn’t work, issues updates and patches for security and functionality improvements, and certifies integration with third-party software and hardware components.