LensTag First Impressions

This weekend I started logging all of my camera gear with LensTag. Now that I have a young child, I can be easily distracted when out and about, increasing my risk of gear being stolen. LensTag seemed like an easy, cheap (it’s free!) way to keep track of it and get assistance if it’s stolen.

I installed the iPad application for LensTag, and while the theory behind the service is great, I’ve had a number of issues with it right off the bat. Now mind you, I’m sure someone will come along and tell me not to complain because LensTag is free, but since I’ve spent some time with it, I’d like to share my issues with it.

  1. The iPad app back button literally goes back like it’s a web page – the navigation is off. I’ve pressed “back” one too many times and ended up at the sign-in screen!
  2. Verifying Serial Numbers – I’ve had numerous issues where I take a picture and the application acts as if I didn’t. On top of that, nothing has been verified yet, and there’s no feedback as to why. It turns out I apparently should have read a blog post they wrote that says not to take pictures of the boxes; it has to be pictures of the lens/camera itself. That would be a good hint to write on the screen, in a “Tips” section on taking photos.
  3. The Used/New Flip Gets Stuck – when looking at the price of your gear as new/used, the toggle gets into a weird state where it’s showing you “new” but displaying used prices.

I think this service has great potential and I’m grateful that it exists. I hope the application can get a bit smarter, and that they can display some more helpful information about proper photos on the entry screen. Overall, though, navigation issues aside, it’s been easy to use to track my gear. Useful enough that I’m willing to go back and take new pictures of everything!

Swift 2.0 – Type Alias Declaration

As I continue my exploration of Swift 2.0 from a Java developer’s perspective, I’m going to highlight things that are interesting to me or seem to make certain concepts easier to follow/implement. As I’m not an expert in every language, highlighting something in Swift is not necessarily an indication that it’s “new” to programming languages in general, but it is new to me as I learn Swift 2.0.

One of the first things that jumped out to me was the Type Alias Declaration which is defined as:

A type alias declaration introduces a named alias of an existing type into your program. Type alias declarations are declared using the keyword typealias and have the following form

Essentially, it’s an easy way to reference “Int” as “Id”, etc. In Java, you could conceptually do this with your own classes by just extending/subclassing, though that generates a lot of extra classes that serve minimal purpose (and you can’t do it for many of the existing Java classes, since the likes of String and Integer are final).

Why does this matter? At least to me, it makes it easier, and a non-issue, to introduce domain-specific types for identifiers. Often, an identifier is just a String or a Long, and it’s hard to justify creating a class to wrap the Long or the String just so that you can give it a name. However, there are plenty of type-safety reasons for doing so (including preventing accidental misuse in your code, because you catch those errors at compile time versus run time). In fact, on some projects I worked on, we actually did deem it important enough to introduce these classes. It created a bunch of extraneous code (we had an Identifier interface, a base class, and then an interface and extension of the base class for each identifier type). Once in place, it’s minimal effort to use, but it did generate a lot of classes just to wrap an identifier.
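To make the contrast concrete, the Java pattern described above might look something like this (a simplified sketch; the class and method names are illustrative, not the actual ones from those projects):

```java
// Marker interface plus one wrapper class per identifier type -- the kind of
// boilerplate described above (names are illustrative).
interface Identifier<T> {
    T value();
}

final class UserId implements Identifier<Long> {
    private final long id;
    UserId(long id) { this.id = id; }
    public Long value() { return id; }
}

final class OrderId implements Identifier<Long> {
    private final long id;
    OrderId(long id) { this.id = id; }
    public Long value() { return id; }
}

public class IdentifierDemo {
    // Accepts only an OrderId; passing a raw long or a UserId won't compile.
    static void cancelOrder(OrderId id) {
        System.out.println("cancelling order " + id.value());
    }

    public static void main(String[] args) {
        UserId user = new UserId(42L);
        OrderId order = new OrderId(7L);
        cancelOrder(order);
        // cancelOrder(user);  // compile-time error -- exactly the safety we wanted
        // In Swift, `typealias OrderId = Int` gives you the *name* with none of
        // these classes (though a plain alias is interchangeable with Int, so
        // strict compile-time separation still needs a distinct type, e.g. a struct).
        System.out.println("user " + user.value());
    }
}
```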

typealias seems like it would eliminate all of those conversations, extra/extraneous code, and make your code safer, all at once! At the moment, I’m just reading through the book, so this is obviously all “theory”, but I’m looking forward to trying it out in practice.

An Android Developer Learning Swift 2.0

I’ve spent a large amount of time in the Java world, as my school (Rutgers University) chose Java as the language to use when teaching us Computer Science fundamentals (they were very explicit that they were not teaching us Java :-)). I then moved on into traditional JEE application development (when it was still J2EE) and eventually into the Android world, which is mostly based on Java (unless you’re one of those people trying Dart or doing native development with the NDK). I’ve enjoyed my time immensely in the Java world, sometimes venturing out into other JVM languages with varying degrees of success (I’m a fan of Groovy in certain instances, and Scala has been a bit of a letdown), but I thought it was time to go outside the “comfort zone”. Therefore, rather than learn yet another Android or Java framework (my resume is littered with random frameworks), I’ve opted to try to learn something completely different (though only time will tell how different it actually is). I’m reading Apple’s Swift 2.0 book (I figured that since 2.0 is in beta, it made more sense to read than the 1.1 version) to see how the other side lives and works. As I go through the book, I’ll be posting my thoughts on various parts of the language: what I like, what I don’t like, and what seems like it would be easier/harder in Swift vs. Java.

At least one thing can remain constant: JetBrains has an IDE for Swift, and if it’s anything like IntelliJ, it’s probably awesome.

Writing Meaningful Code (Part 1)

First off, let’s be clear: I don’t mean meaningful in the sense that you’re changing the world, though if you can manage to do that also, please do! I mean that you’ve only got so much time in the day, so much of your coworkers’ attention, etc., that no one wants to spend more time than necessary reading your code, and you don’t want to spend any more time than necessary writing it (I hope). I write a lot of code and I read a lot of code (whether via watching various repositories at work or reading a plethora of code reviews), and I’ve noticed some patterns that I feel either waste time or reduce readability.

Whenever I meet a developer, I try to impress a few things on them:

  1. Stop rewriting ALMOST the same thing.
  2. Use the “final” keyword as much as you can (though I get equal parts love and hate on that one).
  3. When you want to refactor something, first determine if you’re refactoring it because it’s broken/unmaintainable/etc. versus “it’s not how I would have done it”.
  4. Don’t build something for which you don’t have at least one potential customer.

I’ll tackle the last three at a different time, but I do want to focus on our desire to either rewrite the same thing with a minor difference (one that almost always doesn’t matter) or to just copy/paste, which makes code difficult to understand.

I currently own a platform that we build a bunch of Android applications on top of. We limit the number of external libraries we use so that we have minimal impact on the DEX of our client applications. As a result, we end up “rolling our own” versions of some common utilities. For example, if you’re familiar with the Spring Framework, they have their own Assert class that allows you to do some simple checks (most people use it for parameters). We wrote our own equivalent class to handle various scenarios (it was not feasible to bring in the Spring Framework, and pretty complicated to use something like AspectJ to do the validation work for us via annotations in an environment where you’re also supporting APKLIB). We made this class available to all of our clients. And we saw three things happen:

  1. clients used it
  2. clients refused to use it because they didn’t like the runtime exception we chose
  3. clients chose other patterns that resulted in basically big blocks of if statements

Kudos to the clients who recognized that this class was “good enough” and a useful pattern for reducing the amount of boilerplate code for simple validation checks. Following patterns like this also makes it easy for teams to contribute code to each other and for people to switch teams. All in all, a good usage of time, and in keeping with the notion of spending your time writing meaningful code.

The second category of people, in my opinion, are wasting their energy on things that don’t matter and making it harder for people to share code and work across teams. 🙂 Is there actually a huge difference between throwing an IllegalArgumentException and an IllegalStateException? Is that actually how your team wants to spend its time? Now, if they want to contribute code back to the platform, they’ll need to change the Assert when they do a submission. If they are getting a contribution from another team, since they actually gave their class the same name (and many of the same methods), they’ll need to be extra vigilant in their code reviews (and probably incur more rounds of code reviews for these minor one-off classes).

The third category of people may suffer from not-invented-here syndrome. This particular team not only didn’t use our Assert or write their own custom Assert, but actually copy/pasted a bunch of statements. For example, if they had three parameters, you might see three blocks of code like this:

    if (param1 == null) {
        throw new NullPointerException("param1 cannot be null!");
    }

    if (StringUtils.isEmpty(param2)) {
        throw new IllegalArgumentException("param2 cannot be empty");
    }

    if (StringUtils.isEmpty(param3)) {
        throw new IllegalArgumentException("param3 cannot be empty");
    }

instead of

    Assert.notNull(param1, "param1 cannot be null");
    Assert.notEmpty(param2, "param2 cannot be empty");
    Assert.notEmpty(param3, "param3 cannot be empty");

In the first block of code, you can be scrolling 12 lines before you get to the “meat” of the code, instead of three or four lines in the second example (assuming you added a line break). You’re also going to create a bunch of branching logic that a code coverage tool will flag as not being tested (making it harder to determine your actual code coverage). Finally, for anyone who joins the team, you’re going to make them frustrated that they’re spending their time writing boilerplate code when there’s a solution already available.

In the last two examples, I would ask the developers if what they did felt like a good usage of their time and produced meaningful code (i.e. code that helped you solve the problem at hand). I think, if most of them answered honestly, they would tell me no. As developers, we should strive to always be writing meaningful code to help ourselves grow and to move our business or project forward.

(Please note: I am aware that Google added a bunch of annotations that can generate Lint errors or have Android Studio flag possible errors. These were added after we wrote this class. In addition, the specific code is a minor detail of the overall point.)
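For reference, a helper along these lines is only a handful of lines of code. This is a minimal sketch in the spirit of Spring’s Assert, not the actual class from our platform; the choice of exception here is exactly the kind of detail the second category of clients objected to:

```java
// A minimal Assert-style helper (a sketch, not the platform's real class).
// Throws IllegalArgumentException on failure -- swap in whatever runtime
// exception your team prefers.
public final class Assert {
    private Assert() {}  // static utility; no instances

    public static void notNull(Object value, String message) {
        if (value == null) {
            throw new IllegalArgumentException(message);
        }
    }

    public static void notEmpty(String value, String message) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException(message);
        }
    }
}
```

Each check collapses four lines of if/throw into one call, which is the entire point: the validation reads as a declaration rather than branching logic.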

The Value of Saving Photos You Liked

I went to Italy a few years ago (October 2010), and we spent a few days at a bed-and-breakfast-style place in Tuscany. While we were there, I woke up early one morning and went out to take some photos around sunrise. The entire time we were there, it was usually pretty hazy. I ended up taking a bunch of photos I “liked” in terms of composition, but that disappointed me in terms of haze (these included some early morning shots as well as some shots from the highest point at the bed and breakfast). When I got back, I saved a few of the hazy photos but deleted a bunch of them, because I figured they were unrecoverable (possibly partially recoverable if I had been better at Aperture). I recently switched to Lightroom CC, so I got the upgrade with the “Dehaze” feature. Now, I’m pretty sure someone could accomplish the same changes without using “Dehaze”, but for someone like me, who has had limited time to learn Lightroom, this is a lifesaver. More importantly, it’s a photo-saver :-).

Here’s the before/hazy photo:
Original, hazy photo

I applied +65 Dehaze to it (with a minor adjustment for contrast as well as lens correction), and here’s the new version:
The new, dehazed version.
Total time spent was about 15 seconds. I am pretty sure that if I spent more time, the image could be even better. The important lesson for me here, though, was that I should save any photo whose composition I like, because future software may be able to fix some things that I couldn’t account for when taking the photo (it most likely will not be able to save my blurry messes).

Living with Amazon Echo (for a few days)

Please note that while I am an Amazon employee via one of their subsidiaries, the Echo is my personal Echo, paid for by me, and my opinions are clearly my own and do not represent my employer.

I received my Echo last Friday, a few days earlier than Amazon promised (always awesome when that happens), and I was excited to set it up. Setup failed miserably; I tried setup from the Android app, a Kindle, an iOS app, and the web site itself on my Mac (in two browsers). I wasted about a half hour before I called tech support. They were promptly able to get my device back in working order by having me do a factory reset … you know, right out of the factory. After that, everything worked great and we have been off and running.

At first it’s a little weird having the Echo around. You can’t talk about the Echo using its “name” without triggering it to want to help you. You also tend to yell at it, even though it has like eight microphones or something. You quickly learn not to yell, and it then becomes more pleasant to use. We got all the novelty stuff out of the way quickly, asking all sorts of random trivia questions, etc. We then lived with it under normal circumstances.

Shopping List
We usually think of things we need while we are either holding the baby (i.e. ran out of diapers!) or while cooking (i.e. ran out of spinach). It sounds minor, but just being able to update the shopping list in the moment, and having it synced across devices in the Echo app, is a huge step forward for us in remembering items we need to purchase. We were previously using Cozi (an okay app, though we didn’t need any of its other functionality) or a mini whiteboard that we took a picture of before we left the house. The ability to do the same with the To-Do list is not as useful for me.

Asking it random trivia gets old pretty quickly (except when demoing it to new people), though it might help you win Jeopardy more often. Getting answers to relevant questions in the moment is highly useful, especially when you don’t need some lengthy dissertation. The immediate examples are obviously math or unit conversions, but there’s also the ability to essentially have an audible “second screen” while watching a show or doing some activity, providing some information or context.

Probably a personal thing but I like to just be able to ask the weather or the time. Earth-shattering, no.

I love being able to ask for music by artist, album, and genre and having it play. I find artist and genre to be the easiest to deal with, assuming you want a shuffled mix. Album is a bit more difficult, as it’s the only way to get music to play in order, but the odds of you remembering album names are probably not high for some of your less frequently listened-to bands. Echo doesn’t seem to be able to list them for you so that you can be reminded. You can always use the app, but it feels like cheating when voice normally works so well.

I also have a Sonos system in my house. I don’t plan on getting rid of it any time soon. I sense that it’s probably a better overall system. However, if I just want some random background music (i.e. “Alexa, play Norah Jones”), the Echo is more than sufficient.

I also like being able to easily get a good number of radio stations as well as a collection of recent news.

A seemingly minor feature, but again, just verbally saying “Alexa, set a timer for 5 minutes” is too convenient not to mention.

Future Potential
All of the stuff I mentioned is useful, convenient, and fun, but overall not earth-shattering from a feature perspective (though the technology that makes it work so well probably is). It does, however, give a glimpse into the future potential of devices like this. Imagine if your entire house could respond to your commands. Currently, for a lot of home automation, you’ve either got sensors, control panels, or an app. Sensors are not always accurate, control panels can be unwieldy, and apps just take way too long. But imagine if you could walk into a room and control the lights, etc., with your voice. I would sign my house up for that.

At the end of the day, I’m quite happy with my purchase, even if right now it’s borderline novelty with a lot of future potential. Constantly being in a position where I don’t have two hands free makes it so much more apparent that we need other interaction mechanisms. Voice could also be a solution that allows less tech-savvy folks to utilize some of the items in your house. For example, I’ll never get the Sonos app on my mother-in-law’s phone, but I can get her to say “Alexa, play classical music”.

The mystery of slow connections and Backblaze…

First, let me start out by saying that I love Backblaze and gladly pay them each month to back up two computers; it’s completely worth the money for the peace of mind it provides (and, in theory, the eventual recovery should there be a problem). This post is merely about helping people who might be in a similar situation to mine, where you’re having problems and the tech support for your ISP is less than helpful (not in a mean, hate-the-customer way, but in a we-can’t-help-you-diagnose kind of way).

About three months ago, I started having connectivity problems at night, with commands like “ping google.com” timing out (70 – 80% packet loss) from multiple computers and directly from the router. This often happened during peak times on a shared connection (cable in a townhouse development), so the automatic assumption was noise or other congestion on the line. Whenever there was a problem, I would note the time and dutifully contact Optimum, who, to their credit, would send a technician out every time to look. Even though I asked whether there were any tests I could run in the moment for them, they always opted to send out a technician. That’s nice, but when they come the next day, it’s not quite helpful. They would do their standard swapping of every piece of hardware and running of tests, and they could never find anything.

I was getting fed up, as it was affecting our ability to watch shows, do large downloads, etc. I noticed, however, that I never had this problem when using my work laptop; it was only the personal computers (my wife’s and my iMac). This was a little hard to determine at first, because I wouldn’t often have my personal and work computers on at the same time, so I would never see it degrade when my work computer was on. There happened to be an instance when they were both on, and I thought at first that maybe I had downloaded a trojan or some other nefarious thing. I started looking and couldn’t find anything, but I could confirm the problem happened when my iMac was on.

Then it hit me: my backups. If I do a photo shoot, I can often have 20 GB of RAW images that need to be uploaded. Backblaze was set to go full-throttle, meaning up to 5 Mbps. My connection upstream is unfortunately only 5 Mbps, meaning I was saturating the upstream (it was actually using 4.85 Mbps according to the Backblaze software). Apparently this saturation delayed legitimate packets, meaning the requests/replies were timing out.
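A quick back-of-the-envelope calculation (assuming decimal gigabytes and ignoring protocol overhead) shows why a big shoot can keep the line saturated for hours:

```java
// Back-of-the-envelope upload-time math (assumes decimal GB, no overhead).
public class UploadTime {
    // gigabytes converted to bits, divided by the link rate, expressed in hours
    static double hoursToUpload(double gigabytes, double megabitsPerSecond) {
        double bits = gigabytes * 1e9 * 8;
        double seconds = bits / (megabitsPerSecond * 1e6);
        return seconds / 3600.0;
    }

    public static void main(String[] args) {
        // A 20 GB RAW shoot at the full 5 Mbps upstream vs. throttled to 2 Mbps
        System.out.printf("At 5 Mbps: %.1f hours%n", hoursToUpload(20, 5));
        System.out.printf("At 2 Mbps: %.1f hours%n", hoursToUpload(20, 2));
    }
}
```

At full throttle, a 20 GB shoot occupies essentially the entire 5 Mbps upstream for roughly nine hours, which lines up with the nightly timeouts.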

I throttled Backblaze back to 2 Mbps for now, and everything is quite happy. I’m experimenting with two options:

  • Upgrade to a higher plan (the next plan is only an extra $5 a month, and since we only use Internet (no TV), it seems like it could be worth it)
  • Look at QoS on my router to ensure that HTTP traffic is always prioritized

The bottom line, though, is that if you’re seeing connectivity problems and you’ve eliminated the normal hardware culprits, take a look at what is going up/coming down. You may find something is using all of your bandwidth. I wish the Optimum technicians (when I was on the phone with them) could have pointed this out to me, as I would have been able to resolve this issue without them sending anyone out (saving all of us time and money). Oh, and always make sure to back up, whether it’s local or remote. 🙂

Scrum & Release Dates

A few weeks ago, I had an interesting discussion with one of our product managers. He was working with a team (that does scrum) on a new web site, and he wanted an estimated release date from them. Instead, they gave him a date for when they would have a better idea of what the release schedule would be. This “date” was essentially about three sprints out, when they would be more comfortable with their velocity. But even then, it’s probably completely and utterly wrong. As in not correct. But the question I asked was, “What are you going to do with the release date? Who, this far out, is depending on it?” And this question essentially highlighted the problem with companies that haven’t fully embraced scrum. Your scrum team eventually runs into a team that is more or less doing waterfall release planning (apparently non-technical teams can do waterfall). And how do you manage that? It’s not an easy question to answer, because teams that tend to follow a waterfall style are much more reliant on those release dates and like to hold you to them.

Typically, if your team has completely embraced scrum, you’re always generating potentially releasable sprints (whether you bother to release them to your customers or not). That means your product owner can do a “release” whenever his or her minimum viable product (MVP) is hit, or release whatever you’ve done by his or her deadline. If the related teams are doing scrum (marketing, etc.), they can use the outputs of the development sprint as inputs into their sprints (okay, once this feature is done, we can start preparing this email, or getting X done). With this method, you release when you’re ready, with confidence that what you’ve built is actually what is needed. It causes downstream teams to be more agile also, not expending energy on things that haven’t been done yet (to use the marketing example, not crafting creative or emails for features that the product owner might have dropped).

What happens when one of those downstream teams is not agile? And they’re not willing to become more agile? There’s no clear or obvious answer. You can attempt to withhold information, put extensive warnings around it, or be vague (i.e. the product might be released in Q2 of 2014, but we’ll get you a better date once we get closer), but that’s not really ideal. From what I’ve seen, the best way is to continue to treat them as if they were operating in an agile fashion. Communicate with them frequently, request feedback, revise information iteratively, etc. You can’t force them to be agile, but you can train them to be more responsive. And then maybe, just maybe, they’ll stop asking for release dates that we all know are completely made up, and we can get back to shipping products when they’re ready.

Who’s Responsible for Ensuring Unit Tests Get Completed?

My organization is looking to improve its operational excellence, from the perspective of responding to and resolving customer issues more rapidly, de-escalating issues faster, and ultimately preventing things from becoming issues once in production. While we look at a variety of metrics, one of them is code coverage. There are plenty of articles out there already on the value (or lack thereof) of unit tests (and code coverage), as well as on what having unit tests really provides you. In addition, there are obviously diminishing returns in adding unit tests after a certain level of coverage (my personal thought is that the sweet spot is between 70 – 80%).

Like most organizations, we regularly measure this result, typically automatically via our build server (which collects various other metrics that may or may not be of importance, such as Lint). It recently came to light that some of the projects were nowhere near what would typically be considered adequate coverage to give you confidence that you covered the use cases and could refactor without heavy manual testing (in this case, the coverage was closer to 20%). Various conversations ensued, and one of the comments was (paraphrasing) that the reason there are no unit tests is that the product team did not give them time. Having taken Certified Scrum Master training recently, and knowing that we work in an agile environment (that sort of follows scrum), I thought it was an interesting comment. Ultimately, the only people responsible for writing tests and ensuring they get written are the developers.

You’re thinking, “no shit, Sherlock, you’re not going to have your product owner write the tests. But that doesn’t mean they’re giving you time to write them!” Stick with me. In scrum, the product owner tells you what needs to be done via their prioritized backlog, but they can’t tell you how to do it, or how long it should take. Only the team can determine that. They control their definition of done. If they feel unit tests are needed to demonstrate a feature or consider it complete, they should be including unit tests as a task to complete their story (definition of done). If you’re not completing unit tests, that typically means one of a few things:

  • you don’t actually buy into your organization’s belief in unit tests so you’re not including them in your definition of done
  • you’re considering them a nice-to-have (and thus optional) and most likely dropping them when your velocity drops so that you can complete the “feature” part of another story to say you got all the stories done
  • you’re under-estimating how much time you need to get them done and falling into the same trap as above
  • you’re letting someone else dictate your definition of done, which does not include unit tests

Fortunately, all of those are solved by the team. Including unit tests and allocating time for them is certainly within the realm of things that the team can control. External pressure to adopt a smaller definition of done should be escalated to the scrum master to help provide resolution, as no one besides the team should be defining done from a “how” perspective. The team should not be compromising on its definition of done in order to feel like it has accomplished more. It only hurts them in the long run.