Challenging Software Management

Seldom does an article pluck a thought lurking in the corners of our consciousness, place a spotlight on it, and reveal that it holds the key to unraveling deep-seated beliefs. Matthew Stewart’s The Management Myth is such an article, a rare piece that takes a practical look at management theory – its history, its education, and the author’s personal experience. Reading it, I found myself thinking, “Finally, a management consultant offering an insightful critique of management rather than hype!”

In his article, Stewart explains how Frederick Winslow Taylor came up with the first industrial-era ideas of management theory in 1899. Taylor was working for the Bethlehem Steel Company when he invented a scheme to get workers to load pig iron bars onto rail cars more quickly. He later applied his approach to other business problems and wrote a book titled The Principles of Scientific Management.

Even at the time, it was clear that Taylor’s conclusions were more pseudoscience than science. Taylor never published raw data for his pig iron studies. And when Taylor was questioned by Congress about his experiments, he casually admitted to making reckless adjustments to the data, ranging from 20 percent to 225 percent.

Despite serious protocol flaws and Taylor’s failure to adhere to even the spirit of the scientific method, management has doubled down, embracing empirical measurements that are meaningless as indicators. The purpose of these number exercises is to convince business leaders that the right things are happening. The belief that if you don’t measure it, you cannot manage it continues unabated within unproven methodologies such as Earned Value Management (EVM) and the prioritization of software backlogs using ROI, NPV, or other numerical and financial metrics. The fact that these numbers are fictitious hasn’t slowed anyone down from using them.

But here is the question Stewart brilliantly raises in his article, the one that sharpens the argument against these practices: how is it possible that empirical management continues to be used when the theories and approaches themselves are not held accountable to the very same metric disciplines they force on everyone else? Every development team must prove, using numbers, that it is on track – but nobody has proven that such accountability is effective.

I can confirm, anecdotally through my personal experience, that project management “number exercises” do not lead to improved performance, better risk management, higher quality, or customer satisfaction. In my experience, for what it’s worth, the more “sophisticated” a management approach, the more likely it will have the exact opposite effects.

In my next blog, I will talk about what we need from software management, which is to set constraints to resist the natural temptation to build “something” rather than the right thing.

How fast is Fast?

Meet Cindy, an experienced program manager overseeing a substantial software development effort with over 100 team members. Her program’s funding was allocated on a solid promise of innovation and efficiency. Delivery is scheduled 36 months from commencement and everything appears to be going fine.

The usual story.

At this point you might expect the usual story about a big software project that’s about to go off the rails. The narrative could easily follow how Cindy and her leadership used Earned Value Management (EVM) to report nominal progress until about two-thirds of the way through the period of performance. Then, in the final third of the program, the first domino falls when a segment lead blinks in a game of schedule chicken. Successive dominoes slowly reveal the program’s true condition. Stakeholders realize that two-plus years of EVM reporting was pure fiction. The program had concealed poor software quality, low productivity, and fumbled requirements. The software is nowhere near delivery, and the program has been on a trajectory to overspend by 500% since commencement. Cindy will be sacrificed unless she dodges the hatchet by pinning the blame on a supplier. We’ve all seen this story unfold before.

But this isn’t that story.

Nope, in this case Cindy and her team are doing everything right. They avoid EVM, mainly because Cindy won’t suffer being lied to and despises inauthenticity. She runs her organization old-school, with impossible deadlines and tough accountability. She devotes her days to assertively guiding and assisting her senior leadership; and in turn, her senior leaders spend every day on the production floor. Progress reports from the program’s segments are honest, informed, and direct.


Cindy’s success has nothing to do with Agile, it has everything to do with her finely honed leadership instincts.

By conventional measures Cindy is nailing it as a PM. She earned the respect of her senior leaders, who earned the respect of junior leaders, who earned the respect of the specialists on the floor. People have good jobs they enjoy and are producing a good product.

Cindy’s program is running on time, slightly under budget, and the system looks great. The technology appears to be fulfilling its mandate to deliver innovation with a positive ROI. Cindy’s bosses are happy, her senior leaders are happy, her workforce is happy, and her customers are happy.

So, what’s wrong?

The only caveat is that a different group, led by Rachel, could have delivered the same or better value in just 18 months with a staff 1/3rd the size of Cindy’s at a fraction of the program cost.

This raises a question: how fast is fast? Delivering value on time and on budget is an accomplishment, but it might not be evidence of excellence. In the absence of competition, it’s difficult to know whether a program is truly efficient. Our jaded IT instincts tend to label program management ‘good’ when it merely averts disaster, and to label any ability to horseshoe a deadline as ‘fast’. This is particularly the case with software projects.

However, when a competitor over-delivers by doing innovation better, faster, and cheaper – the comparison reveals a hard truth about us. It often reveals that we’ve been listening to the wrong critics and measuring our success against the wrong scales.

Unfortunately, for most IT programs – there’s no asymmetric competition and therefore no objective way to define ‘fast’. The lack of an aggressive benchmark is what leads most IT programs down the path of inefficiency and low productivity.

Are you Cindy or Rachel?

In this example – Cindy is good, but Rachel is better. Rachel has the same good qualities as Cindy, but she’s more aggressive and innovative. Rachel would never accept a leadership position over a 36-month program when 18 months is plenty. Rachel would never hire 100 people when 25 will do. Rachel scrutinizes every hire, staffing her teams not merely with good people but with exceptional people.

Rachel won’t get bogged down implementing 1,000 features that stakeholders claim to want, she’ll focus on essential features she knows will differentiate the innovation and deliver maximum value. For Rachel it’s more about the product than the project.

Would Cindy be Cindy in an organization with Rachel?

In the absence of competition, Cindy appears strong. In the presence of Rachel, however, she looks weak. The question is whether Cindy would step up her game if she knew she was competing against Rachel. Would having one Rachel inspire more Rachels?

Think about your own IT organization: are your senior leaders driven to aggressively push the envelope of innovation and efficiency at every opportunity? Are you cultivating Cindys or Rachels in your senior leadership program?

Most organizations in real life would LOVE a Cindy – because most real-life PMs are truly awful, and very few IT organizations have ever seen a Rachel. Sadly, the vast majority of software program managers are incompetent. This tips the scales toward lower expectations, causing many CIOs to confuse completion with success – a dangerous association if the organization ever encounters a truly aggressive external competitor.

This slippery slope is how lumbering companies with entrenched and well-defended market positions get outmaneuvered by small nimble companies that smell weakness and opportunity. This is also how Government projects that could easily be completed for under $5 million turn into billion dollar boondoggles.

If you prefer Cindy – you’re the problem

There are several things about Rachel that frighten executive leadership. For example, that little quip about skipping the 1,000 features everyone asks for and instead doing the ‘right features’. Many executives find it uncomfortable to trust the judgment of an appointed leader to arbitrate which features are the ‘right features’, especially when a gaggle of subject matter experts is available to establish consensus on such priorities.

Another cause for executive discomfort is Rachel’s need to experiment and innovate. If she actively experiments with how she manages programs, then she’s obviously not following a normal process – and if she’s not following a normal process, how do we know she’s doing the right things incrementally along the way? EVM is comforting, right? Also, what happens if she gets hit by a bus? If the IT organization relies on Rachel’s elevated talent and judgment, then she becomes an irreplaceable cog, which represents a single point of failure. This is quite frightening.

This logic is how executives think, or at least how executives who aren’t very aggressive think. In the end, given a choice between Cindy and Rachel, such executives would vastly prefer Cindy.

However, as with any strategy, the enemy gets a vote. If you choose Cindy as the safe choice but your competition chooses Rachel, they’ll clean your clock before Cindy ships.

As you hire and promote senior IT leaders, is your goal to stockpile Cindys or Rachels?

Just something to think about.

BY: Thad Scheer
Copyright 2016, Sphere of Influence, All Rights Reserved
Follow me on Twitter @ThadOfSphere

Android vs. iOS from a Developer’s Perspective – Part 5

App Store Deployment

This is the fifth and final post of our five-part series on Android vs. iOS development from our microProduct Lead, Mark Oldytowski.

Now we’re in the home stretch: your app is finished and tested, so it’s time to get it out in the store. Of course, before you get to see your app out in the world, you’ll have to do a decent amount of prep work regardless of which platform you are releasing on. Both Android (Google Play) and iOS (iTunes) require you to sign up for an account and pay a yearly fee to be part of the developer network for distributing apps. Apple takes account verification much more seriously, requiring a tax ID number even if you aren’t selling apps, plus verbal confirmation that you are who you say you are. All in all, it will take a day or two to complete the Apple developer account process due to the verification and paperwork, while setting up the Android store account takes only a fraction of that time thanks to its less stringent requirements.

Speaking of stringency, one of the biggest differences between releasing an app for iOS vs. Android involves the review process for each release: in a nutshell, Apple has one and Android does not. The Android Marketplace is the wild west of app stores: anything goes at first, and it isn’t until you do something really, really wrong (upload viruses, steal info, etc.) that they run you out of town. The advantage is that you can get new releases and patches out almost instantly, with no review process to wait on and no worry about your app being rejected (unless you fail the code validation process). The disadvantage is that there is a lot of junk on the store, and users are less trusting of apps from unheard-of developers. On the Apple front, submitting your app puts it into a review queue, and it could take anywhere from one day to two weeks or more (the average wait seems to be about 8 days) for it to begin the review process, depending on how busy Apple is, the type and size of the app, technologies used, data being shared, and so on. This provides a decent barrier to entry for keeping junk apps out, but it can be painful if you have to get a critical release out, or if Apple decides to reject your app due to a terms of service violation (which will happen eventually; they seem to change the terms every few weeks). Care must be taken on the Apple side to stay up to date with the rule changes to mitigate the app rejection scenario.

Another note about Apple changing the terms of service for the developer program: these updates can sporadically cause mysterious deployment errors and crashes within Xcode. On one occasion I was trying to deploy a build for release to iTunes, but during the verification process Xcode would hard crash with a segmentation fault and no additional information. I spent hours retracing my steps, checking the build configuration to make sure I hadn’t changed anything, and then searched endlessly on the support forums to see what was going on. It turns out Apple had updated their terms of service and required you to hit “Agree” in iTunes Connect (completely independent of Xcode) on the new policy. How anyone would know these two issues are related is beyond me, and it becomes a major problem when you are trying to hit a release deadline and these kinds of issues crop up.

Now, to get back on topic: both Android and iOS require you to create the app description page prior to submitting to the store. These details are similarly straightforward on both services (name, description, cost, keywords, available regions, etc.), with Apple requiring a little more in that you have to supply specific screenshots and icons for each device size you support, while Android lets you get away with more generic sizes. On the Apple side, you will also have to provide information for the tester to use during the review process if it’s relevant to your app, such as account login credentials. Once the app listing is completed and the app is ready for submission, you will need to do a packaged release build within the IDE. For Android, simply generate a signed APK in Android Studio, upload it to Google Play, hit submit, and your app is in the store within about an hour. Apple keeps this part fairly simple as well, with the slight advantage of Xcode integration with the app store, so you can submit your build directly from Xcode (assuming all of your certs and provisioning profile information is correct and the bundle identifiers match perfectly). Once you submit your build, just go back into iTunes Connect, select the build for release, press submit, and then wait about a week for it to reach the store. Make sure everything on the Apple side is ready to go when you press submit, as re-submitting a new binary, even prior to review, puts you back at the end of the review line. Submitting patched apps on each system is very similar to new releases, so you will have the same week-long delay on the Apple side unless you can get Apple to expedite the review (which doesn’t happen often). With each new version of iOS there is a mad dash to submit app updates, often due to Apple changing framework features, so it’s best to have patches prepared as far ahead of time as possible.
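The signed-APK step on the Android side is typically driven by a signing configuration in the module’s Gradle build file. Here is a hypothetical sketch – the keystore filename, alias, and environment variable names are placeholders, not values from this post:

```groovy
android {
    signingConfigs {
        release {
            // Placeholder keystore details -- substitute your own
            storeFile file("release-key.jks")
            storePassword System.getenv("KEYSTORE_PASSWORD")
            keyAlias "release"
            keyPassword System.getenv("KEY_PASSWORD")
        }
    }
    buildTypes {
        release {
            // The release build type picks up the signing config,
            // so the release task emits a signed APK
            signingConfig signingConfigs.release
        }
    }
}
```

With something like this in place, running the Gradle release build produces the signed APK that you then upload to Google Play.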

One final note on the store side: while Apple has only one app store available on its devices (jail-broken devices not included), the hands-off approach to Android has led to multiple 3rd party stores popping up. The other big player on the Android market, Amazon, has its own app store for its Kindle devices, which requires another account / submission process (and potentially a new build) to become available on those devices. In addition, many smaller companies have started unofficial app stores, which require the same. It’s nice to have the option out there to add some competition (leading to cheaper prices, etc.), but it becomes higher maintenance to keep track of the different stores and keep the release updated on all of them at the same time.


Android
+ Lack of review process gets apps and patches out quickly
– Junk apps in the store clutter things up for everyone else
– Segmented app stores cause higher maintenance

iOS
+ App reviews promote some quality level for apps, reducing junk apps
+ Single app store keeps things simple
– Review process can be lengthy, especially for critical patches
– Constant terms-of-service updates cause rejections and odd issues

And Finally….

After that long, essay-like analysis of the two platforms, one feature sticks out in my mind as the reason I would pick one OS to develop on vs. the other: device segmentation. The IDEs and app stores are comparable, the frameworks have similar features, and language-wise I can eventually get the same work done on both platforms, but what I can’t get past is the testing time involved in creating an Android app that actually works universally. It might be easy to get 95% of the way there, or even 99%, but there will always be that oddball device that causes you issues, and from a software engineer’s perspective, that is a tough issue to get past. Plus, with the terrible performance of the Android emulators, you need physical devices for each screen size you test on, which gets pretty expensive. For all its problems, at least iOS is fairly consistent across all its available devices (for now). It has never taken more than a few hours to adjust for all of the available screen sizes, and performance for a run-of-the-mill app is decent enough on all devices.

Read Part 1: Development Environment and Project Setup
Read Part 2: Language and Framework Features
Read Part 3: UI Design Tools and Controls
Read Part 4: Testing and Debugging

Download LMK
Download PubRally
Download Quack-a-pult

Come out and see us at AgileDC 2015!

They thought we were so awesome that they accepted four of us at Sphere to present at AgileDC this month.

Come join us on October 26th, 2015 at the Kellogg Conference Hotel in Washington, D.C.

AgileDC is the largest Agile community-organized event in the Washington, D.C. area. It brings together thought leaders from both government and commercial industry.

“In recent years, Agile has been extremely popular within the IT community, however it can be difficult to implement and maintain” says Thad Scheer, Managing Partner at Sphere of Influence. “We believe we have valuable information to share with the community and are looking forward to providing our insight into how to achieve a successful and productive Agile environment.”

Register for AgileDC

Topics include:

 Scott Pringle, Executive Vice President: Horseshoes, Hand Grenades, and Agile 




Analytics Studio – Sphere of Influence – on Periscope and Meerkat

Graduate from Zombie to Master

Don’t be like your friends who will leave college to become zombies. A zombie can wake up in the morning, drive to the office, go to work, drive home, watch TV, sleep, and repeat without a single truly conscious moment.

Don’t graduate to that ‘life’, which is comfortably painless, but totally unrewarding. College graduation should be about liberation, not about joining the Walking Dead. Contribute something extraordinary to the world by doing things your peers won’t be able to imitate. A career isn’t about your commute, your job, your boss, or your position. A career is defined by that one thing you do that nobody else can.

Join us on Periscope and Meerkat Friday, September 18th at 12:30pm EST.
Follow @Sphere_oi




We’ll talk about what YOU CAN DO to make yourself a ranked master, no matter who you are. Don’t graduate and become another zeroed-out zombie that mindlessly wanders between work and home. Neural and cognitive plasticity offers a liberating alternative, and it’s something you can control.

Tweet questions to @Sphere_oi LIVE during the broadcast.


Sphere of Influence specializes in data analytics, machine learning, software engineering, and digital product development. Our studios are deeply technical, we are fast moving, we value honest nice people, and we have actual passion.

Job postings


Agile Easy vs Agile Hard

Many of our clients have transitioned to Agile but are disappointed with the bottom-line gains for the company. On closer inspection, we find widespread adoption of the easy parts of Agile and equally widespread avoidance of the rigorous software development practices that lead to rapid deployment of high-quality software products. At Sphere we make the hard stuff look easy while bringing measurable improvements in productivity and business value.


• Easy
  • Stand Up Meetings
  • Short Iterations
  • Pair Programming / Peer Reviews
  • Barely Sufficient Requirements
  • Less Documentation
  • Less Process
  • New Furniture
  • New Office Layout
  • Calculating Velocity
  • Not Committing to Delivery
  • Feature Centric
• Hard
  • Weekly Demonstration of Working Code
  • Continuous Integration
  • Continuous Automated Test
  • Continuous Velocity Improvement
  • Automated System Test
  • Automated Acceptance Test
  • Automated MTTF Test
  • Automated Deployment and Test
  • Stop-the-line Quality Policy
  • Product Focus
  • Automated Code Quality Gauntlet
  • Automated Code Quality Metrics
  • Automated Architectural Enforcement
  • Metrics Based Sprint Planning

While the ‘Easy’ practices make us feel Agile, the ‘Hard’ practices make us productive.
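Many of the ‘Hard’ items amount to an automated pipeline that refuses to let bad code through. A minimal stop-the-line sketch in shell – the gate names are illustrative placeholders, not a prescribed toolchain:

```shell
#!/bin/sh
# Illustrative stop-the-line gate: every automated check must pass
# before the build may proceed; the first failure halts the line.
run_gates() {
  set -e                     # abort on the first failing gate
  for gate in "unit tests" "integration tests" "code quality metrics" \
              "acceptance tests" "deployment smoke test"; do
    # A real pipeline would invoke the project's tooling here;
    # this sketch only records that the gate ran.
    echo "gate passed: $gate"
  done
  echo "All gates passed - line is clear"
}
run_gates
```

In a real pipeline each `echo` would be replaced by the project’s own test runner, static analyzer, or deployment script; the point is that a single failure stops everything downstream.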

Incorporating Strong Center Design Into an Agile Project

Strong Center Design is an approach we developed at Sphere of Influence to unify software design so that every feature and design choice enhances a product’s impact by creating a single, powerful impression. Our approach is an alternative to ad hoc design choices that lead to a mish-mash of competing centers.

One of the challenges we overcame was integrating Strong Center Design with an Agile culture where it is a matter of ritual to prioritize features and design choices on an iterative and incremental basis. To integrate Strong Center Design with Agile, we considered five distinct approaches and examined the weaknesses of each one.

1. Design up-front
Insert a dedicated design step (much bigger than a sprint-zero) before launching the first Agile sprint. A good analogy is the ‘pre-production’ step used when filming Hollywood movies.
Weakness: Has all the same pitfalls of waterfall (phase-gate) development.

2. Design in each sprint
Do a little design at the beginning of every sprint.
Weakness: There is never enough time to develop a good design without delaying developers. Either design or productivity suffers.

3. JIT design a few sprints ahead of development
Two separate parallel workflows in each sprint: one for design (about two sprints ahead) and the other for development. Use Kanban signals to trigger JIT design work before it is needed by the next sprint.
Weakness: Not ideal because the further ahead design gets, the less Agile the process becomes. A design team can also end up supporting several different sprints at the same time – the next design sprint and the current development sprint.

4. Dedicated design sprints
Sprints oscillate between design-focus and development-focus.
Weakness: Everyone is tasked to work design during design sprints even if they lack the skill or desire to work on the design. The reverse is true for development sprints.

5. Designer and developer partnerships
Known in academia as ‘fused innovation’, this pairs a design professional with one or more developers.
Weakness: Federating designers contradicts the objective to achieve wholeness in the design. It is difficult to implement with a distributed team.

To make Strong Center Design compatible with Agile, we discovered a hybrid approach worked best, formed from two of the options.

First, do a little design up-front. Don’t design the entire product, but take some time to establish a strong conceptual center, something we call the North Star. The North Star creates a unified focus that everyone can agree on. It also fleshes out the design language plus any design principles that will shape the product. Do this work up-front, before the first sprint.

Once development starts, we found the best way to achieve Strong Center Design is to replace the typical Product Owner role in Agile with a designer who leads the team. This is the most controversial aspect of our approach, as many people regard consensus-based (i.e., collaborative) prioritization as a core tenet of Agile. However, relying on consensus to prioritize design choices tends to optimize a single part of the product at the expense of the whole.

While not perfect, this blend gives us the best of both worlds: a design-driven product with a single unified identity, plus the production efficiency of Agile.

Vet Your Agile Advisor with 5 Questions

Whether an internal hire or a hired gun, onboarding someone to advise you about Agile is a risky but important gamble. Not all Agile advisors are equal; there are good ones, bad ones, and some in-between ones.  How can you tell the difference?

Just to be clear – Agile is the worldwide standard for software development today. However, Agile is a philosophy, not an instruction book. Like any philosophy, Agile is susceptible to variation in interpretation and implementation. Such variation has extreme consequences, sometimes creating a fast-paced highly productive culture and other times a lumbering beast that drains budgets with low workforce engagement and questionable delivery throughput. It should come as no surprise that simply ‘being Agile’ is no cure for anything.

I often work with post-Agile organizations; i.e., organizations that fully transitioned to Agile Software Development. The top complaint I hear from executive management in these organizations is that productivity is the biggest disappointment.

I agree that many organizations suffer chronic productivity problems even after transitioning to Agile. The root cause isn’t Agile itself, it’s the type of Agile.

As a philosophy, Agile is non-hierarchical, self-organizing egalitarianism: consensus decision-making, community ownership, deep collaboration, transparency, and open communication. From this perspective, Agile could be a 1970s-era hippie commune, complete with spiritual leader, a.k.a. the ‘Agile Coach’. Viewed this way, there is no emphasis on driving workforce engagement, productivity, high-margin returns, or blistering speed.

Relax man! We’ll get there when we get there


However, the hippie bus is not the full story with Agile. The Agile philosophy also embraces extreme discipline around test generation and execution, continuous integration, individual craftsmanship, proper software engineering, lean workflow optimization, small elite teams, and continuous delivery. Agile can be aggressively aerodynamic, packing tons of horsepower into a small footprint – if that’s your thing.

Unfortunately, few Agile practitioners go deeper than the commune-style egalitarianism aspects of Agile.


Imagine yourself in the position of vetting a new Agile advisor. What questions do you ask?

Here are 5 questions we designed to help you vet whether someone is performance-oriented:

1 – How does Agile fail?

If they answer with some variation of “lack of buy-in from management or stakeholders,” then run, because they plan to blame you for any failures. As with all philosophies, Agile can fail; and it does, predictably. Someone with real experience squeezing high performance out of Agile should be familiar with its modes of failure. Those who sprinkle ‘Agile fairy dust’ over organizations generally refuse to accept that Agile can and does fail. These people will be unable to articulate the circumstances under which it fails or why. Likely they will answer your question from the ‘Agile is perfect – everything else is the problem’ perspective. That perspective is naïve and not at all useful.

2 – If productivity is my #1 concern, how does that impact Agile?

I’m not suggesting productivity should be your #1 concern, but you need to know how your Agile advisor reacts to such prioritization. What components of their approach to Agile would be emphasized or deemphasized to optimize for productivity? Lower-end advisors will avoid directly answering the question and focus on the definition of ‘productivity’. Productivity is seen by lesser Agilists as an outdated 20th century ode to Taylorism. They’ll talk about how it’s not proper to think in terms of productivity in the modern age and how software organizations are much more complicated than that. Of course, this attitude is partly to blame for so many IT shops that have chronic productivity problems after transitioning to Agile. Not only do many Agile advisors not know how to optimize for productivity, they don’t recognize productivity as something important.

3 – If innovation is my #1 concern, how does that impact Agile?

This question is intended to separate the master class from the middle. It may come as a surprise, but innovation is rarely discussed within Agile circles. It’s not an ugly word, like ‘productivity’; it’s just not discussed much. The lesser Agilists believe innovation is a natural ‘happy accident’ that comes from self-organizing egalitarianism, consensus-based decision-making, community ownership, deep collaboration, and open communication. These people lack a basic understanding of innovation leadership and certainly don’t know how to incubate it alongside Agile. The truth is, innovation is not addressed by Agile. If innovation is a priority, additional workflows are necessary to generate, select, and develop fresh high-impact ideas.

4 – How will I know if Agile is successful?

A gripe many executives express 3 (or 5) years after embarking on an Agile transformation is that they are still transforming. When does the train arrive at the station? How can you tell when Agile is working? Just as lesser Agilists are unlikely to blame Agile for any failures, they are equally unlikely to promise measurable improvements or success. A grittier person will take this question seriously and put their reputation on the line. Think of it this way: if your organization doesn’t feel a sharp improvement in feature delivery and quality at lower production costs … why exactly is Agile being embraced?

5 – If we only adopted three things from Agile, what three practices will make the biggest difference?

This question separates the bottom of the barrel from the middle. If the answer covers daily standups, retrospectives, planning poker, pair programming, story walls/backlogs, or anything like that, then fail. If their list of three includes ‘iteration’, that’s borderline okay – but iteration is hardly unique to Agile (even Waterfall had it) and should be done regardless. If their list includes Continuous Integration or Continuous Delivery, then gold star; in fact, it’s hard to envision a correct answer that omits those two. Also give gold stars for advanced Test Automation, but the answer must go beyond the mere basics of creating unit tests that will inevitably wind up in the ‘technical debt’ pile; i.e., the answer must key on ‘advanced’. The double-gold star goes to the person who prioritizes ‘technical excellence’, particularly with respect to team member selection. It’s not that the 4 values and 12 principles of Agile aren’t all important, but some are far more important than others. Does your Agile advisor know which ones those are, or at least have a strong opinion?

Finally, for extra credit you could ask them how they would implement Agile without Scrum. Many practitioners of Agile only know Scrum (it takes < 1 hour to become a Certified Scrum Master). Challenging your Agile advisor to formulate an approach to Agile without Scrum tests their knowledge of ‘first principles’ rather than their hour of training. It’s like asking an artist to paint in black and white: sure, they have the skill to paint with color, but it’s a good test of whether they can make something beautiful from a smaller palette.

Android vs. iOS from a Developer’s Perspective – Part 4

Testing and Debugging

This is the fourth post of our five-part series on Android vs. iOS development from our microProduct Lead, Mark Oldytowski.

You’ve made a great app, so now you just have to make sure it actually works on the majority of devices out there (don’t skimp and test only your own device; it will bite you quickly). Luckily, both environments provide you with the tools you need to debug your code, and each has an ecosystem for pushing your software out to local and remote testers. Testing your app with external users will be key to finding issues that everyday users might run into but you wouldn’t notice as a developer (users do some really, really strange things you would never expect).

Within Android Studio, you will find an easy-to-use debugging environment with all the standard features, including breakpoints, console output, a memory analyzer, and hover-over variable inspection. As mentioned earlier, Android Studio has an emulator you can use for debugging, but anyone short of a master of patience will find it fairly useless on a regular basis. When you are ready to enter the real world and use a regular device for testing, just tap the build number 7 times (weird, I know), enable debug mode, and you are ready to go. Runtime debugging is straightforward with Android, but there are a few issues. Right away, you will likely need to hunt down the USB driver for your specific device before Android Studio will even recognize it, which can be difficult with off-brand devices. After that, the connection between the debugger and your device may randomly drop, or Android Studio might stop recognizing the device completely, so you may need to restart Android Studio or your phone to get things back to normal. Other than those few problems, the experience with Android Studio is enjoyable, with great support for catching runtime exceptions and crashes – save for the few segmentation faults that can occur when using specific device features (camera, device storage, etc.) and require serious research to figure out what is causing the crash (although this happens on iOS as well).

On the Xcode side, you will find similar features to Android Studio, but with three key differences: usable emulators (as mentioned before), paid device debugging, and provisioning profiles. Apple requires you to join the developer program before you can deploy your app to a physical device, so if you don’t pay the membership fee (plus signing up for the store; more on that in the next post), you won’t be able to deploy your app to any physical device (unless you jailbreak it). This may not impact most developers who plan on deploying their app to the store anyway, but it could affect new developers who are trying to learn iOS development and aren’t sure if they want to pay the money and stick with it. Xcode also adds the complication of requiring a “Provisioning Profile” for features such as push notifications, In-App Purchases, etc., and any app using these features requires the device ID to be registered in the profile before deploying to the device. This causes a back-and-forth between the Apple Member Center and Xcode to load new local devices. At the end of the day, it doesn’t prevent you from deploying the app, but it can be a major source of frustration if the Provisioning Profile is not in sync with the device being used or the deployment features.

Once you are ready to send your app out to external testers, Android keeps things simple by not restricting how you distribute the app. If you want to email the app to a user, just send it; all the tester has to do is enable debugging mode, drag-and-drop the app onto the device, and they are ready to go (no device IDs required). If you are looking for a more formal and traceable way of deploying the app, there are multiple external services you can use to send your app out to users on a version-by-version basis (HockeyApp, Apphance, etc.). Most of these services will provide you with a way to get feedback and crash reports from the users testing the app, which can be extremely useful given the wide spread of device types the testers will be using.

Apple has taken a more integrated approach to testing recently (it’s about time) with its purchase of TestFlight in 2014. As of iOS 8, Apple has incorporated it directly into its development environment, for better or for worse. The advantage is that you can now deploy beta versions of your app directly from Xcode, and you no longer have to worry about collecting device IDs from each user and adding them to the provisioning profiles (Apple handles this for you). Of course, Apple has managed to convolute multiple aspects of the setup process and test deployment (as it tends to do). The initial setup for adding users to the testing process has moved to a hidden area of the Developer Center, which now requires configuring the app and each user you would like to invite to the beta. Internal and external testers are now broken up into separate sections, and deploying to external testers requires approval from the Apple review board (more on this in the next blog post). Once you pass the initial headache of figuring out how it actually works and get everything up and running, it does run very smoothly and allows quick deployments straight from Xcode (a huge plus). There are still external testing platforms available on the iOS side, but knowing Apple, I can see those starting to disappear (or just stop working) soon now that there is an official way of doing things.


Android:
+ Plenty of options for internal and external testing
+ No Provisioning Profiles / Device IDs required for testing
– USB driver hunt for devices
– Debugging device disconnect issues

iOS:
+ Integrated testing system
+ Stable testing on devices
– Provisioning Profile issues, even when debugging
– Convoluted initial setup for adding users to TestFlight
– Pay wall for device debugging

Stay tuned for Part 5: App Store Deployment

Download LMK
Download PubRally
Download Quack-a-pult

Read Part 1: Development Environment and Project Setup
Read Part 2: Language and Framework Features
Read Part 3: UI Design Tools and Controls
Read Part 5: App Store Deployment

Android vs. iOS from a Developer’s Perspective – Part 3

UI Design Tools and Controls

This is the third post of our five-part series on Android vs. iOS development from our microProduct Lead, Mark Oldytowski.

WYSIWYG designers have come a long way in the past few years. Even in the early 2000s, they never quite worked as intended. Therefore, many developers, myself included, would ignore them and just write the UI code by hand. In today’s world, they have finally started catching up to the rest of the IDE and become a valid, if not necessary, form of UI development. The mobile revolution has not ignored this movement and now both platforms come equipped with formidable UI design tools.

In Xcode, your UI development world is Interface Builder. In the latest versions of Xcode, there has been a move from XIBs, in which you build each screen in a separate file, to storyboards, where you build multiple screens together in a single file and map the transitions between screens within the UI. Multiple storyboards can also be used to break up large projects into more manageable chunks, which becomes necessary since storyboards are a nightmare to merge between developers. Care should be taken to break up the UI work into meaningful and properly sized storyboards so you can still reap their benefits, but prevent each one from becoming too large and difficult to navigate.

Interface Builder is fairly stable and feature-rich, but it’s a little perplexing at first if you don’t go through a tutorial to figure out how some of the features work. Once you get the hang of attaching view controllers to storyboard elements, then dragging controls from the storyboard back into the view controller files so you can reference them in code (confusing, I know), it becomes a very powerful and time-saving tool. When developing for multiple screen sizes, Interface Builder allows you to adjust the size of each screen in real time to verify that everything is up to par. If you are developing for both iPhone and iPad, you can choose to let the OS stretch everything, or develop separate storyboards for each device type to provide the most robust UI possible on each (but you will have to write more view controller code).

The layout designer within Android Studio handles all of your UI building tasks. In recent versions of Android, the interface consists of a series of activities and fragments. An activity covers a larger chunk of functionality (such as registering a user account), while a fragment is typically a single task within an activity (capture the username, capture account information, etc.), and each is handled on a one-file-per-fragment/activity basis. Referencing a control is a manual process in Android Studio vs. the simple drag-and-drop in Xcode (although autocomplete does help). Android Studio also provides multiple device skins and sizes for the screen preview process, but in my experience they don’t always show exactly what the UI will look like on an actual device of that size at runtime. One advantage Android Studio has over Xcode is that it builds the code-behind file before displaying the visual preview, so any additional drawing operations performed in the loading event of the code-behind will show up in the preview (these would be hidden in Interface Builder). This does lead to a few cases where the preview won’t appear due to missed references, but in all likelihood those are problems with your code that should be fixed regardless.

Comparing the controls on the two platforms is a matter for another day, since each matches the style of the platform it was built for. For now, the only comparison to make is that the iOS controls just feel better. Take, for example, the date-time picker control on each platform: the iOS version feels very fluid and natural, while Android’s control feels like something built for Windows Forms in 2002. Of course, controls can be replaced, but you would not believe how difficult it is to find a decent date-time picker for Android (hint: it doesn’t exist). When it comes to modifying existing controls, both platforms suffer from the problem I have dubbed “Custom Control Syndrome”. Both severely limit the changes you can make to a control using the built-in properties, so any significant change will probably require extending the control into a new class or building a control from scratch. Android has a few hacky ways to walk up the visual tree to capture the control’s components and make modifications, but these aren’t reliable, since any change to the OS can cause catastrophic app failure once the control can no longer be found up the tree.

A major annoyance that both platforms suffer from, but especially iOS, is the keyboard covering user input fields. Android makes an effort to move the screen up with the keyboard so it does not cover the focused control (it just doesn’t work correctly sometimes), but iOS does nothing about this issue. Every view that contains an input field will end up requiring code to move the view around. Many other keyboard and focus issues exist on both platforms, but this one truly takes the cake.
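On iOS this typically means listening for the keyboard-will-show notification and shifting the view yourself. The geometry behind that shift is simple; here is a minimal, platform-neutral sketch (the class and method names are mine, not from either SDK) of computing how far a view must move so a focused field clears the keyboard:

```java
public class KeyboardOffset {
    // Returns how many pixels the view must shift up so the bottom of the
    // focused field sits above the top of the keyboard. All y-coordinates
    // are measured down from the top of the screen.
    public static int shiftForField(int fieldBottomY, int screenHeight, int keyboardHeight) {
        int keyboardTopY = screenHeight - keyboardHeight;
        // If the field already sits above the keyboard, no shift is needed.
        return Math.max(0, fieldBottomY - keyboardTopY);
    }

    public static void main(String[] args) {
        // A field ending at y=900 on a 1000 px screen with a 300 px keyboard:
        // the keyboard's top edge is at y=700, so the view must move up 200 px.
        System.out.println(shiftForField(900, 1000, 300)); // 200
        // A field at y=500 is already clear of the keyboard: no shift.
        System.out.println(shiftForField(500, 1000, 300)); // 0
    }
}
```

On either platform the real work is wiring this into the keyboard show/hide events and animating the shift, but the arithmetic above is the whole trick.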

Now for the real elephant in the room: Android device segmentation. With the huge combination of devices and screen resolutions running Android, I was actually very impressed with the way Android Studio attempts to solve this problem during development and how well it actually works in practice. Android uses the concept of dp (density-independent pixels) to account for different pixel densities and screen resolutions. This works very well for devices with a similar aspect ratio, but since Android has such crazy device segmentation, many screens in the UI still have to be adjusted for the odd aspect ratio here and there. The issue also comes into play with animations, since there is no direct way to use dp when animating; the result is that you have to compute everything manually. Performance can also suffer on underpowered devices (of which Android has plenty), leading to screen tearing and dropped frames.
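The dp unit works by normalizing against Android’s baseline density of 160 dpi, so the same dp value resolves to a different number of raw pixels on each device. A minimal sketch of the conversion you end up doing by hand for animations (the densities in the example are the standard mdpi/xhdpi/xxhdpi buckets; the class name is mine):

```java
public class DpConverter {
    // Android defines 1 dp = 1 px at the baseline density of 160 dpi,
    // so: px = dp * (deviceDpi / 160).
    public static int dpToPx(float dp, float deviceDpi) {
        return Math.round(dp * (deviceDpi / 160f));
    }

    public static void main(String[] args) {
        // The same 48 dp touch target on devices of different densities:
        System.out.println(dpToPx(48, 160)); // mdpi   -> 48 px
        System.out.println(dpToPx(48, 320)); // xhdpi  -> 96 px
        System.out.println(dpToPx(48, 480)); // xxhdpi -> 144 px
    }
}
```

The scaling handles density well; it is the aspect-ratio spread that dp cannot paper over, which is why the oddball devices still need manual adjustment.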


Android:
+ Solid UI builder
+ Compiles code-behind into the preview
– Linking with code-behind is a manual and tedious process
– Sub-par default controls
– “Custom Control Syndrome”
– Device segmentation causes all sorts of issues

iOS:
+ Solid UI builder
+ Linking with code-behind is seamless (once understood)
+ Fluid default controls
– Storyboard problems with big projects / teams
– “Custom Control Syndrome”
– Keyboard covers input fields

After that long, essay-like analysis of the two platforms, one feature sticks out in my mind as the reason I would pick one OS to develop on over the other: device segmentation (and it isn’t just because it was the last thing I mentioned, I swear). The IDEs are comparable, the frameworks have similar features, and language-wise I can eventually get the same work done on both platforms, but what I can’t get past is the testing time involved in creating an Android app that actually works universally. It might be easy to get 95% of the way there, or even 99%, but there will always be that oddball device that causes you issues, and from a software engineer’s perspective, that is a tough issue to get past. Plus, with the terrible performance of the Android emulators, you would need physical devices for each screen size you test on, which gets pretty expensive. For all its problems, at least iOS is fairly consistent across all its available devices (for now). It has never taken me more than a few hours to adjust for all of the available screen sizes, and performance for a run-of-the-mill app is decent enough on all devices.


Stay tuned for Part 4: Testing and Debugging

Download LMK
Download PubRally
Download Quack-a-pult

Read Part 1: Development Environment and Project Setup
Read Part 2: Language and Framework Features