The mystery of slow connections and Backblaze…

First, let me start out by saying that I love Backblaze and gladly pay them each month to back up two computers; it's completely worth the money for the peace of mind it provides (and, in theory, the eventual recovery should there be a problem). This post is merely about helping people who might be in a similar situation to mine, where you're having problems and your ISP's tech support is less than helpful (not in a mean, hate-the-customer way, but in a we-can't-help-you-diagnose kind of way).

About three months ago, I started having connectivity problems at night, with commands like "ping google.com" timing out (70–80% packet loss) from multiple computers and directly from the router. This often happened during peak times on a shared connection (cable in a townhouse development), so the automatic assumption was noise or other congestion on the line. Whenever there was a problem, I would note the time and dutifully contact Optimum, who, to their credit, would send a technician out every time to look. Even though I asked whether there were any tests I could run in the moment for them, they always opted to send out a technician. That's nice, but when they come the next day, it's not very helpful. They would do their standard swapping of every piece of hardware and running of tests, and they could never find anything.

I was getting fed up, as it was affecting our ability to watch shows, do large downloads, etc. I noticed, however, that I never had this problem when using my work laptop; it was only the personal computers (my wife's and my iMac). This was a little hard to determine at first because I wouldn't often have my personal and work computers on at the same time, so I would never see the connection degrade while only my work computer was on. There happened to be an instance when they were both on, and I thought at first that maybe I had downloaded a trojan or some other nefarious thing. I started looking and couldn't find anything, but I could confirm the problem happened when my iMac was on.

Then it hit me: my backups. If I do a photo shoot, I can often have 20 GB of RAW images that need to be uploaded. Backblaze was set to go full throttle, meaning up to 5 Mbps. My upstream connection is unfortunately only 5 Mbps, so I was saturating the upstream (the Backblaze software reported it was actually using 4.85 Mbps). That saturation delayed legitimate packets enough that requests and replies were timing out.

I throttled Backblaze back to 2 Mbps for now, and everything is quite happy. I'm experimenting with two options:

  • Upgrading to a higher plan (the next tier is only an extra $5 a month, and since we only use Internet, no TV, it seems like it could be worth it)
  • Setting up QoS on my router to ensure that HTTP traffic is always prioritized

The bottom line, though, is that if you're seeing connectivity problems and you've eliminated the usual hardware culprits, take a look at what is going up and coming down. You may find something is using all of your bandwidth. I wish Optimum's phone support could have pointed this out to me, as I would have been able to resolve the issue without them sending any technicians (saving all of us time and money). Oh, and always make sure to back up, whether it's local or remote. :-)

Scrum & Release Dates

A few weeks ago, I had an interesting discussion with one of our product managers. He was working with a team (that does scrum) on a new web site, and he wanted an estimated release date from them. Instead, they gave him a date for when they would have a better idea of what the release schedule would be. This "date" was essentially about three sprints out, when they would be more comfortable with their velocity. But even then, it's probably completely and utterly wrong. As in not correct. The question I asked was, "What are you going to do with the release date? Who, this far out, is depending on it?" And this question essentially highlighted the problem with companies that haven't fully embraced scrum: your scrum team eventually runs into a team that is more or less doing waterfall release planning (apparently non-technical teams can do waterfall). How do you manage that? It's not an easy question to answer, because teams that follow a waterfall style are much more reliant on those release dates and like to hold you to them.

Typically, if your team has completely embraced scrum, you're always generating a potentially releasable increment each sprint (whether you bother to release it to your customers or not). That means your product owner can do a "release" whenever the minimum viable product (MVP) is hit, or release whatever you've done by his or her deadline. If the related teams are doing scrum (marketing, etc.), they can use the outputs of the development sprint as inputs into their sprints (okay, once this feature is done, we can start preparing this email, or getting X done). With this method, you release when you're ready, with confidence that what you've built is actually what is needed. It causes downstream teams to be more agile as well, not expending energy on things that haven't been done yet (to use the marketing example, not crafting creative or emails for features that the product owner might have dropped).

What happens when one of those downstream teams is not agile, and is not willing to become more agile? There's no clear or obvious answer. You can attempt to withhold information, put extensive warnings around it, or be vague (i.e., "the product might be released in Q2 of 2014, but we'll get you a better date once we get closer"), but that's not really ideal. From what I've seen, the best approach is to continue to treat them as if they were operating in an agile fashion: communicate with them frequently, request feedback, revise information iteratively, etc. You can't force them to be agile, but you can train them to be more responsive. And then maybe, just maybe, they'll stop asking for release dates that we all know are completely made up, and we can get back to shipping products when they're ready.

Who’s Responsible for Ensuring Unit Tests Get Completed?

My organization is looking to improve its operational excellence from several perspectives: responding to and resolving customer issues more rapidly, de-escalating issues faster, and ultimately preventing things from becoming issues once in production. While we look at a variety of metrics, one of them is code coverage. There are plenty of articles out there already on the value, or lack thereof, of unit tests (and code coverage), as well as what having unit tests really provides you. In addition, there are obviously diminishing returns to adding unit tests after a certain level of coverage (my personal feeling is that the sweet spot is between 70 and 80%).

Like most organizations, we regularly measure this, typically automatically via our build server (which collects various other metrics that may or may not be of importance, such as Lint results). It recently came to light that some of the projects were nowhere near what would typically be considered adequate coverage to give you confidence that you had covered the use cases and could refactor without heavy manual testing (in this case the coverage was closer to 20%). Various conversations ensued, and one of the comments was (paraphrasing) that the reason there are no unit tests is that the product team did not give them time. Having taken the Certified Scrum Master training recently, and knowing that we work in an agile environment (one that sort of follows scrum), I thought it was an interesting comment. Ultimately, the only people responsible for writing tests and ensuring they get written are the developers.

You're thinking, "No shit, Sherlock, you're not going to have your product owner write the tests. But that doesn't mean they're giving you time to write them!" Stick with me. In scrum, the product owner tells you what needs to be done via the prioritized backlog, but they can't tell you how to do it or how long it should take. Only the team can determine that. The team controls its definition of done. If the team feels unit tests are needed to demonstrate a feature or consider it complete, it should be including unit tests as a task required to complete the story (definition of done). If you're not completing unit tests, that typically means one of a few things:

  • you don’t actually buy into your organization’s belief in unit tests so you’re not including them in your definition of done
  • you’re considering them a nice-to-have (and thus optional) and most likely dropping them when your velocity drops so that you can complete the “feature” part of another story to say you got all the stories done
  • you’re under-estimating how much time you need to get them done and falling into the same trap as above
  • you’re letting someone else dictate your definition of done, which does not include unit tests

Fortunately, all of those are solved by the team. Including unit tests and allocating time is certainly within the realm of things that the team can control. External pressure to include a smaller definition of done should be escalated to the scrum master to help provide resolution as no one should be defining done from a “how” perspective besides the team. The team should not be compromising on their definition of done in order to feel like they’ve accomplished more. It only hurts them in the long run.

Android Logging with Log.isLoggable

I've been working on some code that uses the standard Android Log class directly, and some code that uses SLF4J's Android support, and I'm noticing some interesting behavior. The SLF4J binding dutifully checks whether the level is loggable before it decides to log (i.e., Log.isLoggable(TAG, Log.DEBUG)), which effectively limits the number of items written to logcat and also avoids constructing the log strings (since it won't run its message formatting). Android's Log class does no such check when you output to logcat. Therefore, if you're not manually wrapping your Log calls with isLoggable, they will log to logcat regardless of the level: even if DEBUG is disabled, an unwrapped call will always be output. This causes a few problems:

  • confusion – having DEBUG-level output appear when it is supposedly disabled is going to confuse anyone comparing the logcat output to the code
  • inefficiency – it's not supposed to be output, but it is, along with all the object and string construction that goes with it

There is one benefit to not checking the logging level, and that is not having to deal with the weird "setprop" dance that Log.isLoggable requires (its documentation describes setting a per-tag system property to change the loggable level). The sketch below shows both the unguarded and guarded styles.
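To make that concrete, here is a minimal sketch of the guarded pattern. The class, tag, and message are hypothetical; the only real pieces are android.util.Log.isLoggable, Log.d, and the per-tag setprop switch mentioned above.

    import android.util.Log;

    public class DownloadTask {
        // Keep tags short: on older Android versions isLoggable() throws an
        // IllegalArgumentException for tags longer than 23 characters.
        private static final String TAG = "DownloadTask";

        void onProgress(int bytesRead, int total) {
            // Unguarded: always written to logcat, and the message string is
            // always built, even when DEBUG is "disabled" for this tag.
            Log.d(TAG, "read " + bytesRead + " of " + total + " bytes");

            // Guarded: only logs (and only builds the message) when the tag's
            // level has been raised, e.g. via
            // "adb shell setprop log.tag.DownloadTask DEBUG".
            if (Log.isLoggable(TAG, Log.DEBUG)) {
                Log.d(TAG, "read " + bytesRead + " of " + total + " bytes");
            }
        }
    }

The SLF4J Android binding essentially does the guarded version for you on every call, which is why its output thins out while raw Log calls keep flowing.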

While the Android Log behavior is less than ideal, SLF4J brings its own set of issues. If you have strict mode checking on, the first attempt to bind the static logger (which scans the jars on the classpath) can trigger an "Application Not Responding" dialog. Other than that, I've found the SLF4J facade, combined with either SLF4J-Android or Logback-Android, to provide the ideal level of logging control.

Note: to get around the ANR with SLF4J, I actually skip the LoggerFactory lookup and get the logger from the binder directly:

StaticLoggerBinder.getSingleton().getLoggerFactory().getLogger(RandomClass.class.getName());

I don’t know if that’s the best thing to do, but it seems to be working for me.
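One way to keep that call contained is a small helper class, so the rest of the code doesn't reference the binder everywhere. This is only a sketch of the same workaround, not an officially blessed SLF4J pattern, and the Loggers class name is my own invention:

    import org.slf4j.Logger;
    import org.slf4j.impl.StaticLoggerBinder;

    // Hypothetical helper: asks the bound logger factory for a logger directly,
    // skipping LoggerFactory's binding scan (the part that trips strict mode
    // and can cause the ANR on first use).
    public final class Loggers {
        private Loggers() {
        }

        public static Logger get(Class<?> clazz) {
            return StaticLoggerBinder.getSingleton()
                    .getLoggerFactory()
                    .getLogger(clazz.getName());
        }
    }

A class then just declares private static final Logger log = Loggers.get(RandomClass.class); and logging looks the same as it would with the normal factory.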

When Millions of Dollars Gets You…well…Crap.

Much has been written about the mediocre Samsung/Jay-Z application "Magna Carta Holy Grail", which, as far as I can tell, is no longer in the Google Play store. Working for a company that is focused on customer obsession, and being an Android developer, I thought I would add a few thoughts to the mix (though once again, my opinions are my own). To be perfectly blunt, the application felt like it was designed by a bunch of first graders (no offense to first graders), both in its technical implementation and in its development as a product.

  • Requesting more permissions than it needs – According to The Guardian:

    The app – which has since been removed from the Google Play store by Samsung – was described by the technology site Ars Technica as “positively PRISM-like in its requests for your information”, with fans prompted to agree to a number of app permissions before installing it.

    Some of those permissions were necessary: for example, to store the downloaded files on fans’ handsets.

    Others, such as its request to access the device’s location and information about other apps running on the phone, and to read the phone’s status and identify when it’s being used for voice calls, were more questionable.

    Jay-Z needs to know where I am? I didn’t see any geo-location benefits in this application. Why does Samsung need to know that I’m playing Angry Birds?

  • Friend spam – To do anything in the application you had to "unlock" it first, which typically meant spamming your friends on Twitter or Facebook with "I just unlocked X on Jay-Z's app! Yay! Be happy for me!" You're basically handing your customers a crippled application and saying that if they want to do anything, they have to harass people. A more customer-focused method would be to make this stuff available regardless and let the customer decide if it's worth tweeting (while providing something meaningful for them to "share"). Letting people know you unlocked lyrics isn't helpful to anyone. In fact, your poor iPhone friends will never be able to get the same enjoyment as you. Yes, they made a questionable choice with their iPhone (I kid!), but the way the application shares things, you're not offering a way for them to share in your excitement. Let's be honest, they're not going to run out to buy a Samsung Galaxy S4 so that they can see the lyrics. Some intrepid person is going to get the PDFs and post them anyway.
  • Developmental Laziness or Product People Asleep at the Wheel – I'm not sure whether this falls under developer laziness or product people saying "GET THIS DONE NOW!!!!!!!!!!", but once you've unlocked the lyrics above, they show up in some weird PDF view that gets into a strange zoom state, and it's pretty much impossible to actually read them. Android has this thing called a TextView. On top of that, a TextView would let you actually read the lyrics offline. Crazy, I know. You may not want to constantly use your data connection.
  • App Crashes – Enough said. It's annoying. It's dumb. It was constant.
  • Server Crashes – You’re Samsung. You just paid Jay-Z millions in some grand marketing scheme. You can’t spend an extra million to make sure your servers stay up? (or that you hire competent developers?)
  • Internet Connection Required – Really? What is your entire application, a proxy to some remote servers? If so, please make sure your servers can handle the load and that your application can properly retry. It's annoying to open an application that shouldn't require a connection and find out it won't load. On top of that, it's annoying to have to force close the application because it can't properly recover from some weird network connectivity issue.
  • Internet Connection Required, Part 2 – I finally managed to download the music in about 20 minutes before I boarded a plane (and trust me this was at like 5am, so the servers should have been relatively untaxed). I’m like yeah, five hour flight, new Jay-Z, this will be awesome. Oh wait, I can’t play the music because the app requires a connection. #$#@$@#$!
  • Music Player Integration – When I first downloaded the music it was not being pulled into my normal music player which caused my above problems on said plane. This appears to have been fixed. But really? Who even thought not having this at first was a good idea? I’m not going to go to some special magical place to listen to Jay-Z. He’s not that awesome.

Apparently even Jay-Z thought this was not cool. Most of this was avoidable and was obviously the result of people making bad choices. Customers will put up with a lot for free, but that doesn't mean they should have to, especially if you want them to pay attention to you the next time. You're not helping your brand image, nor are you generating customer goodwill.

First Impressions: Android Studio (I/O Preview) and the Gradle Build System

Before I joined my current company, I was an avid JetBrains IntelliJ IDEA user, both professionally and personally. When I joined, it turned out that most of the internal tools had plugin support only for Eclipse, so I settled in to start understanding and accepting Eclipse again. I even started to use it personally because I kept mixing up the keyboard shortcuts. I've switched teams a few times at my current company (from ACX, to the web team, to Discovery, and now to the Android team). When I switched to my most recent team, we were using Eclipse with the ADT plugin, along with the Android Ant-based build system. Getting everything to work well together was pretty atrocious. When Google announced its new build system, we decided to jump on it, figuring it's better to be ahead of the curve than stuck on the mediocre Ant build system.

We've been pretty happy with the new build system, using plugin versions 0.2 and 0.3 mostly (0.4 was announced on the same day as Android Studio). We've had to accept some of its shortcomings or work around certain things:

  • lack of integration with the Java plugin – this means that certain plugins (such as Cobertura, the licensing plugin, etc.) cannot currently be used when the Android build system is in play. This is apparently because the Android plugin doesn't use standard source sets, which are required for proper compatibility. The Android team is apparently working with the Gradleware team to make the changes required to get this compatibility back.
  • it's in flux – every plugin revision (0.2 -> 0.3 -> 0.4) has required some change to our build.gradle files in order to keep working. The changes typically aren't huge, but depending on how quickly you jump on the new plugin, they may not be documented yet. I've found that having the plugin source code checked out, combined with running Gradle with debug-level output when an error occurs, has allowed me to figure out what changed.
  • lack of Eclipse support – The Eclipse plugin has not yet been upgraded to work with the Gradle-based build system. This is not much of an issue now that Android Studio is out, but if you prefer Eclipse (apparently still needed for native support), you'll need to accept a few things:
    • You can't completely use the new Gradle directory structure – Eclipse expects the AndroidManifest.xml file and the /res directory to both be at the project root. Placing them anywhere else causes the Eclipse plugin to complain.
    • You’ll need to roll your own additional Eclipse configuration. I had additional configuration in place to look for the existence of src/main/java, src/main/aidl, src/test/java, etc. and manually add them to the Eclipse XML file. Similarly, you need to add the required Eclipse project natures, etc. manually. We’re migrating to Android Studio so I’ve deleted that code now and moved our files to their proper locations.
  • Lack of typical configurations – the plugin didn't define the usual test configurations, so we had to declare testCompile ourselves in order to keep test dependencies separate. I expect this to go away with proper Java plugin integration.

    configurations { testCompile }

As you can see, we've encountered a few issues that we either worked around or accepted. You may obviously encounter more or fewer issues depending on your project's complexity. Overall, I think jumping on the Gradle plugin has made us more productive:

  • Proper project dependency definitions – it's easy for us to tell what we depend on, plus we're no longer relying on binaries checked into source control
  • better ecosystem and configuration – plugin availability (especially once Java plugin integration works), along with a clear, concise way to script things or reference plugins (I always found it personally annoying to rely on an Ant task)
  • industry-standard directory structure – whether you love or hate Maven (I happen to be okay with it), its standardized directory conventions are quite handy and nice

The Gradle plugin on its own is nice, but combined with Android Studio, my developer happiness has skyrocketed. Using Android Studio has been quite pleasant so far (though the IDEA key commands are only slowly coming back to me), and it's obvious that some pretty decent effort has been put into the Gradle integration. That said, here are some things I've encountered so far:

  • Surprise, it needs Gradle 1.6 and plugin 0.4! – somewhere along the way, the 0.4 announcement never got made, but it's in the Maven repositories. Plan on using it if you want to use Android Studio.
  • Closing and Opening a Project – I've seen some weird dependency issues, or the project not auto-updating. Closing and re-opening the project solves all of these. I need to confirm whether I forgot to turn on auto-import (regular IDEA has that for Maven projects) or whether there is some bug.
  • Modules depend on the compiled Android library – I've found that modules that depend on other modules resolve against the compiled Android library file, which has basically forced me to associate that binary with the source project manually in order to click through to the source. Strange. I assume it will get fixed.
  • build directory clean up – I found in a few instances it wasn’t cleaning up build directories correctly. I had deleted some AIDL files and the compiled/generated code was still in the build directory causing confusion when I went to rename something.

Even with these issues, if you are a former IntelliJ IDEA user who settled for Eclipse while doing Android development, I encourage you not to be scared off by the "0.1" version number and to give it a try. Even with these minor issues, I've been much happier (and I'm no longer getting out-of-memory exceptions like I was in Eclipse, despite giving it 2 GB of memory!). I'll continue to post my impressions as I use it longer and continue to upgrade Gradle and Android Studio versions.

ASUS Transformer Infinity TF700 10.1-inch Tablet and Developer Options

Hopefully, this can help out other people. I was looking to manually install some Android APKs onto my personal ASUS TF700 tablet and needed to enable the appropriate options in the "Developer options" sub-menu in Settings. Unfortunately, it was nowhere to be found. I scanned through all the settings at least 10 times, thinking it was me being dumb. I searched to see whether ASUS had sneakily disabled it in recent builds (I am on 4.2.1) and how to re-enable it. It turns out it's not disabled, just hidden by default. The trick?

To make the Developer options menu visible on the ASUS, you MUST:

  1. Go to Settings -> About Tablet
  2. Tap on the build number 7 times (it will count down for you so that you know it's having some effect)

Doing that will make the Developer options menu reappear. Why is it like that? I have no clue.

Source: http://www.transformerforums.com/forum/transformer-pad-infinity-general-discussion/35720-solved-developer-options-setting-gone-missing.html

Be Useful

When it's your job to reach out to people who probably have less time than you to chat, it's always a good bet to provide useful information if you want them to contact you back. Tell me, why am I going to spend 15 minutes of *my* time chatting with you when you've given me no reason to:

How are you? I noticed your profile on LinkedIn and was impressed. I have a terrific position that looks excellent for your background. Can we talk?
If you’re not interested, please just simply archive my message.

Yes, I am definitely going to call you back when you've provided me no useful information about the position, the location, what part of my background makes it an excellent fit, etc. Respect people's time if you want them to take you seriously.

Android Bricking

My new favorite permission, from this file: https://github.com/android/platform_frameworks_base/blob/master/core/res/AndroidManifest.xml

    <!-- Required to be able to disable the device (very dangerous!). -->
    <permission android:name="android.permission.BRICK"
        android:label="@string/permlab_brick"
        android:description="@string/permdesc_brick"
        android:protectionLevel="signature" />

Review: Spring Security 3.1 by Robert Winch and Peter Mularien

I was given a copy of Spring Security 3.1 by Robert Winch and Peter Mularien to review on my blog. A few disclaimers to get out of the way first:

  • I was/am a Spring Security committer (I think I still technically have access but don’t actively develop right now)
  • My copy of the book was free (electronic version, compatible with Kindle)
  • Speaking of Kindle, I do work for Amazon/Audible, however, my opinions here are my own.
  • I was the technical reviewer for the Spring Security 3 version (but not for 3.1)
  • I am a committer and member of the steering committee for Jasig CAS

To make this easier, I want to get the non-content comments out of the way first. I read this on a Kindle Fire (non-HD version), meaning the screen is about 7 inches. This is rather small for many of the tables unless you rotate the screen when viewing them; tables get cut off in portrait mode with no real way to scroll to the right. Attempting to zoom in on some of the images also caused all sorts of weirdness (things actually seemed to get smaller), though I don't know whether that was an artifact of the book or of the Kindle itself. I found the flow diagrams to be a bit amateurish in design. Finally, I did notice a few typos in some of the content, but for the most part the text and style were not distracting. None of this, aside from the grammar/typos, would affect readers on larger tablets or of the paper edition.

With that said, let's jump straight into a review of the content. The book covers the basics of Spring Security: authentication (CAS, LDAP, X.509, OpenID, JDBC, etc.), authorization (ACLs, RBAC, etc.), and some of the features specific to Spring Security (its session-fixation support, etc.). The first question that always comes up is, who is the target audience? I read pretty much the whole book (I will admit I did skip a few XML definitions), and it's definitely geared towards people who want a broad overview and then want to jump straight into integrating the security they need (whether it be LDAP, X.509, etc.). If you're familiar with Spring and don't care about the details or heavy customization, you could jump straight to the specific chapter you're looking for and be done. I guess that makes this in many ways more of a cookbook. They do go in-depth on some of the harder or less straightforward topics (e.g., authorization, ACLs, etc.).

It's very clear that the authors did significant research on each topic they presented (I was particularly impressed with their section on Jasig CAS). You never leave a chapter thinking that they left something unexplored. On the other hand, sometimes the amount of information was overwhelming because it's not something you would normally need. The authors sometimes struggled with the level of detail to provide, going in-depth on topics that would have been better left to the reader and their favorite search engine. In some instances, they overwhelmed the reader with details on the harder way of doing something, when just introducing the simpler way would have been better.

Each chapter on authentication did a good job of giving a decent explanation of the authentication method itself (which is conceptually independent from Spring Security), the architecture of the feature, and how to configure it. If you were not reading this as a cookbook (i.e. just jumping to relevant chapters), but as a book to learn security concepts, I feel like it would be a decent primer on many of the various authentication methods, when they are useful, and their pros/cons.

The book does a good job of explaining some of the other features of Spring Security, detailing the myriad options, trade-offs, and extension points. Reading the chapters on session fixation, remember-me, and ACLs, I felt that I gained a good understanding of how they work, how to configure them, and their trade-offs. The ACL chapter was a bit overwhelming, but then that feature has been a bit of a mess since day one. I have no plans to ever use JSF, so I skipped that section. :-)

The one thing that did disappoint me about the book is that, if you already own the Spring Security 3 edition, I didn't see much to make you want to go out and purchase this new one. I didn't find the appendix in the back of the book to be a compelling enough reason.

Pros:

  • In-depth coverage of each Spring Security topic, providing an overview, architecture, and configuration
  • The details can help you make relevant/educated decisions beyond just Spring Security configuration

Cons:

  • Formatting issues in Kindle edition
  • Sometimes the information is overwhelming or irrelevant to learning Spring Security; you could Google/Bing it if you really wanted more detail
  • No compelling reason for Spring Security 3 owners to upgrade that I could see

Bottom line: if the Spring Security 3.1 reference guide published by SpringSource is not cutting it for you or you need some basic guidance on authentication/authorization techniques, this is probably your best choice. If you’ve already got an infrastructure in place, and you just need some basic copy/pasting of configuration, then this may be overkill.