Tuesday, December 23, 2014

Read the Freaking Documentation to Couchbase!

Just wanted to point out a pretty awesome product with pretty awesome documentation:

Couchbase.

We're using it for a pretty specific use case and for that, it's handling things extremely well.  However, we noticed that it was using a ton of swap space to the point I was thinking we were on Windows boxes.

Buuuut, lo and behold, we hadn't read all the documentation on how our nodes should be set up. Here it is for v2.2, which we're using:

http://docs.couchbase.com/couchbase-manual-2.2/#swap-space
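The gist of that page, if I remember it right, is that the kernel's swappiness setting should be turned way down on Couchbase nodes. A sketch of what that looks like on Linux (treat the exact value as the docs' recommendation, not mine; verify against the page above):

```shell
# Check current swappiness (how eagerly the kernel swaps; 60 is a common default)
cat /proc/sys/vm/swappiness

# Apply immediately for the running system
sudo sysctl vm.swappiness=0

# Persist the setting across reboots
echo 'vm.swappiness = 0' | sudo tee -a /etc/sysctl.conf
```
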


The swap-space warning actually appears further down in the issues section as well, complete with impact and JIRA issues!

It's actually pretty awesome to be able to track something down this easily.  Obviously, other people have encountered this, which has led to filed issues, but it's great to see the trail and the resulting recommendation.

Sunday, December 07, 2014

Searching Pull Requests in Stash - You Don't

A pretty popular JIRA Issue for the Stash project is this guy, STASH-3856:


Specifically, the only reason I'm searching for a pull request after the fact is to see whether there was any conversation regarding a particular design decision, or whether anything was explicitly deferred for later work.

Also, it's a super quick way to see when a given feature was merged into a particular branch like Master.

With that said, chances are, you're using JIRA alongside Stash.  Let's be honest, the only reason to use Stash over other solutions is for its JIRA integrations.  If your issues are directly associated with your Pull Requests, you should then be able to easily search your JIRA issues using its robust search features.

JIRA Development Panel
So unfortunately, if you're eagerly awaiting STASH-3856, I highly doubt it's going to be implemented any time soon.  There are definitely some Stash-specific searches that would be nice, such as when a pull request was merged or dismissed and when it was updated.  However, until then, searching for the JIRA issue covers the vast majority of use cases.
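For example, since every pull request references a JIRA issue, a plain JQL search gets you back to the conversation (the search text and version name here are made up for illustration):

```jql
text ~ "connection pool" AND fixVersion = "1.2.0" ORDER BY resolved DESC
```

From the matching issue, the Development Panel links you straight back to the pull request in Stash, which covers both the "find the design discussion" and "when did this merge" cases.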

Tuesday, December 02, 2014

JIRA - Old School Greenhopper and Milestone Management with Versions

I wouldn't be surprised if there are still a lot of organizations using the Classic Boards in JIRA Agile to manage their medium- to long-term plans.  I want to quickly go over how one can manage these in the Classic Board, along with the limitations of this approach, before I talk about a much more complex scenario in a future post.



First, you'd establish Versions in your Project that represent different Milestones, with dates associated with them.  From there, you'd have your actual release Versions that are directly associated with the releases you're cutting.
With the Classic Planning Board, using the sub-version feature, you can readily see which release versions build up to which Milestones, and which Milestones build up to broader Milestones.  You'd associate stories to your release versions, and from there, you can track the progress of your parent Milestone versions.


I wanted to work this particular feature in here, as I still find it extremely useful today and few people pay attention to it: the Merge Versions feature.  Essentially, if you've already established a Patch release and realize that this guy isn't going to ship until the next Minor version, you just merge the Version into that Minor version.  From a Git Flow perspective, this would be comparable to having a Release Branch prepared but never merging it to Master, instead just merging changes (if any) back to Develop.


So anyway, the biggest limitations to this approach are the following:

  • FixVersion in JIRA suddenly becomes an overloaded Term.
  • This method is limited for a Single JIRA Project.
Specifically, Atlassian realized that using FixVersion for so many different things, especially sprints, was not going to be a good fit for organizations that require some flexibility.  That is one of the core reasons why the Sprint field is a new core field for JIRA Agile.  Also, when a field starts to mean multiple things, it becomes much harder to query and report on reliably.

Organizations can establish JIRA projects in a variety of ways for a variety of reasons.  However, when you have two projects gearing towards a common milestone, a lot of organizations start getting into the habit of having versions of the same name in multiple projects and identify them as the same milestone.
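The shared-name convention usually surfaces as a query along these lines (project keys and version name hypothetical), which only works so long as every project spells the milestone identically:

```jql
project in (PROJA, PROJB) AND fixVersion = "Milestone-2015-Q1" ORDER BY project ASC, Rank ASC
```

Rename the version in one project and its issues silently drop out of the results, which is exactly the fragility I'm talking about.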

This can lead to a lot of confusion and inflexibility in how a new project can be established.  Also, if you're shoehorning an association between fields that the system doesn't natively support, you're probably using those fields wrong.

Soapbox Time:  It's pretty hilarious how we, as developers of software for others to use, really suck at using tools as they were originally intended.  Shoehorning functionality that doesn't exist "just because we want it that way" means either that we're using the wrong tool or that we honestly don't know how we want our processes to function.

Anyway, if you are still using Versions in this manner in JIRA Agile, I'd be very interested in what you like or don't like about this method.  Next time, I'm going to cover the more recent JIRA Agile functionality, introduced circa 2011, that provides some more flexibility but still has a lot of limitations.


Tracking Multiple Sprint Teams with a Common Goal

... or rather, a goal that YOU have that you're depending on other teams to accomplish for you...

So one thing I feel that people using JIRA Agile should get used to is the notion of building and breaking down Agile boards for targeted efforts to allow them to track something very specific and eventually just get rid of the board when they're done with it.

Here's a query:

"Epic Link" in (EPIC-1, EPIC-2, EPIC-3, EPIC-4) OR issue in (EPIC-1, EPIC-2, EPIC-3, EPIC-4) OR labels = myEffortLabel ORDER BY Rank ASC


What this query essentially means is "Give me these Epics, give me the Issues in these Epics, and give me any issue that I happen to have labeled."


This gives you a quick and hopefully, targeted view of a set of milestones in the form of Epics that you and other teams are driving towards. My personal observation is that the majority of teams pretty much ignore epics and epic links anyway, so establishing this linkage to the actual stories that they are working on should have minimal impact.

At the very least, you can keep an eye on the general progress of a concerted effort.

Just remember to clean up after yourself and delete the agile board when it's all said and done!

Friday, July 11, 2014

New Job, New City... so Let's Talk St. Louis

So how long does it take to find, apply for, interview for, and accept a job; prepare a house for sale; move; and settle down for a weekend?

About 3 months, apparently.

BTW, Google "days since april 17, 2014" and you get to a nifty app that tells you how many days.

So anyway, I don't usually talk about St. Louis on here, as that's what Yelp's been for.  Aside from our friends in the area, I'd like to take some time to point out some things that I enjoyed in my 10 years there.

La Pizza / Protzel's Deli / Bob's Seafood

Mmmmmm....

These are the 3 establishments that I will miss the most.  Not just because of what they offered, but because of the people who worked there.  Heck, the La Pizza guys convinced the Bob's Seafood guy to let me pick up 60 lbs of live crawfish on a day they were closed.  That's how awesome they are.  I think I'll be hard-pressed to find comparable places.  On top of that, these places were within 5 miles of each other, which leads me to my next point:

Regional Convenience

Sure, you could talk about what other cities have that St. Louis doesn't have, but if it were in St. Louis, it was dead simple to get there.  We lived in a spot where we could get to just about anywhere in the region in 30 minutes or less.  Also, 30 minutes typically means ~20 miles away.  Seriously.  The other day the wife was looking for things within a mile away from our place in Chicago and she was like "that's FAR!"

There's no Mitsuwa in St. Louis, but there might as well not be one here either, because I have no idea how often we're going to get that far out into the Chicago burbs from the city.

Ok, I'm lying.  We have some good friends who live nearby and we'll likely use them as an excuse to go to Mitsuwa at least once a month or so.

Hometown Pride

To be honest, I think just about every single midwest city suffers from some form of inferiority complex.  Chicago included.  St. Louis is definitely not an exception, but the passion for working towards improving the area is infectious.  The atmosphere in the more urban neighborhoods is leaps and bounds more lively and optimistic than they were even a couple years ago.  The Loop, the Grove, Cherokee, even Downtown, which many left for dead multiple times in the past couple of decades, all have some pretty significant work going on that I was pretty gosh darned excited for.  

View from the top of the City Museum
http://ruff-ranch.blogspot.com/2009/07/city-museum.html
STL is one of those areas that is still transitioning from manufacturing to technology and sciences.  They definitely have some big trials coming up on the jobs front, but there are enough skilled, passionate people with genuine loyalty to the area to establish themselves and make it work.  Square, Woot, and Riot presences, past or present, were as much personal decisions as business decisions.

St. Louis has been good to us and I have nothing but fond memories of the area.  It will forever be the first place I've been an "adult" at and has definitely shaped my outlook on things.  See you around.

Sunday, April 27, 2014

Microservices! Rorschach Reactions.

+Martin Fowler and James Lewis of ThoughtWorks have a really in-depth and thoroughly thought out article talking about Microservices.  There's particularly a nice blurb in it comparing the term to the more generic SOA.

The context of this article is kept at a more technical execution level.  I'd like to bring up some thoughts on the requirements and communication that can be involved.  In reality, this is more of a Rorschach reaction to the pretty awesome images that they had on their article.

Generally, three things a software development team thinks about are:
  • What is my Deliverable?
  • How is it going to change over time?  What are my requirements?
  • Who is impacted by my changes?  How do I coordinate with them?
However, there's another question that, surprisingly, a lot of IT organizations are still extremely immature at handling:
  • What deliverables need to be changed for my needs?  How do I get those changes to happen?
Here's a pic from the article discussing a functionally siloed organization and the corresponding Architecture that tends to occur.

Because of Conway's law, the architecture reflects the org structure, and this is also a representation of how requirements flow.  The UI folks have some user-facing use case and they need a service change from the middleware folks, who in turn, need something of the DBA guys.  One common disadvantage to this is the further away you are from the original requirement, the more you get some odd telephone game situation going where the API isn't exactly what the UI guys are looking for or the DBAs decide to ignore the middleware guy's model design.  This can lead to a lot of churn and rework.




With a cross-functional team, you have teams that are more focused on the user-facing use case, rather than entire groups of people who are only focusing on their immediate deliverable.  One excellent point also made in the article is that large monolithic architectures can be organized into "too many contexts."  The article posits that this tends to be a result of cross-functional teams in a monolithic architecture, but I think it happens regardless of how your teams are organized.  This is a big reason why we get a lot of spaghetti logic, where a module may be trying to satisfy too many things at once, as opposed to being split into separate modules.

Something that these pictures also accurately depict is that your cross-functional teams would have fewer people of each function focusing on particular business logic and can likely be working on a particular microservice.  Looking at this from a requirements flow perspective, you can potentially get resource constraints (or managers imposing them), which is one of the main reasons why functionally siloed organizations exist to begin with.

This comes back to the questions:
  • What microservices need to be changed for my needs?  How do I get those changes to happen?
Do you make requests to a microservice's backlog to be prioritized?  How do you deal with that team's resource constraints?  How do your requested changes impact other teams consuming that microservice?  These questions aren't specific to microservices, but with more granular teams, this amount of "red tape" may seem daunting to more traditionally managed organizations.

In addition, with that many more deliverables in the form of microservices, ensuring that the consumers of your deliverables are prepared for the changes you're making becomes that much more work.


On the left side, for monoliths, consumers of modules choose which version of a module to include in their process at build/deploy time.  On the right side, consumers choose which module to utilize by communicating with the appropriate service at run-time.  Fundamentally, there isn't too big a difference in these dependencies, but operationally, in terms of identifying your dependencies and working with your sysadmins on deployments, it is pretty different in terms of what changes and dependencies you're communicating, and to whom.

It all boils down to how well your organization can communicate across teams.  For a small organization, physically talking or chatting over email or a chat service works out.  However, with large, geographically distributed teams with potentially different cultures and whatnot, this gets unwieldy very quickly.

This post is getting long so I'm going to cut it here.  I'm not just talking about microservices, I'm talking about any organization that has multiple internal deliverables across teams.  How can a large geographically distributed organization, consisting of multiple teams, meet customer expectations while ensuring smooth operation from build, to test, to deploy?  I'll run through some thoughts on the challenges and solutions that may fit your given culture.


Saturday, April 19, 2014

JIRA Shenanigans - @mention Multiple People

So previously, we talked about how the @mention feature in JIRA is pretty nifty.  However, the current feature only allows you to @mention a specific profile, one at a time.  Therefore, if you want to keep the conversation going in JIRA, all the participants are going to either need to be @mentioned every single time or they can be added to the Watch List.

Ripped off of
Atlassian Blog
If you have the appropriate permissions, you can add specific people to the Watch List yourself.  They may not appreciate it, but if they don't want notifications, they can always take themselves off.  I think this would be particularly interesting in how they improve upon their HipChat JIRA integration, but I digress.

Enter an open request in JIRA to @mention a group.  This makes sense, in that JIRA allows one to configure groups of profiles to be used in various permissions, mapped to project roles, and in notifications.  However, there isn't a view-only permission for groups or a way for a user to resolve a group to a list of profiles, so the burden falls on the JIRA Admin to set things up, maintain them, and let others know who the group members are.  There's an open request for this guy as well.

To take things further to allow more flexibility, I submitted a request for @mention to a Project Role, allowing each project to determine which profiles or which groups should get a notification.

Fortunately, there's a pretty dead simple way around this, which is to create a dummy profile whose email address is an email distribution list.



There are definitely quite a few issues/limitations with this approach.  Some of the immediate things that come to mind are:
  • It will be dependent upon your email setup as to who can view the members of the list and who can edit it.  This is all dependent upon your organization.
  • This takes up a slot in your JIRA user license.
  • Unless you actively have a user for this distro in an externally managed system like LDAP or Active Directory, this requires you to create the user in the JIRA User Directory.  This may require some admins to finagle with their current setup.
So as always, one of the easiest ways to keep up with the features you want in any of Atlassian's tools is to add yourself to the watch list on Atlassian's own JIRA issues and vote on them.  @mentions can definitely be improved upon, but the tool can only be as good as the feedback the developers get from those actively using it.


Sunday, April 13, 2014

JIRA Shenanigans - @mentions are better than Email

One feature that has drastically changed the way we use JIRA among a geographically distributed team is the @mention feature.  This allows one to specifically notify someone: "hey I think this is important you should read this."

We all hate the telephone game and accountability is pretty important.  This is enforced in a strong way from Alice in this Dilbert comic:


The thing is that there are a couple issues with E-Mail.  The main thing is that it only resides in the inbox/outbox of the recipients/sender, preventing any normal person from taking a look at the conversation later on.  You'll get the whole "oh there's a thread floating around I'll forward it to you" scenario.

Especially when an email is just between two people, you run the risk of some valuable communication being lost when both are not around for whatever reason that may occur.

Enter JIRA Comments.  Having the conversation there provides a couple of things.  It's a public forum, so people tend to be a bit nicer there.  It's query-able, as all Issue fields are, and comments persist so long as nobody deletes the issue.

There's this nifty @mention feature that you can use to direct a comment at someone.  It sends a notification (usually in the form of an email) with the comment, but adds the context of the JIRA issue and the rest of the conversation.  Some annoyances with this feature come when you're in a thread with multiple people and you want to ensure that everyone gets a notification.  In this situation, the Watch feature can be used to keep up with the conversation as well as other changes.

After I wrote up all that (and most of the next post) I stumbled upon this blog post by +Dan Radigan that concisely describes using @mentions and Watchers effectively.  In addition, it also has some nifty queries to help people find posts that they were mentioned in as well.  Definitely check it out.
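In the same spirit, a query along these lines can dig up recent comments that mention you (a sketch: this leans on text search matching your username in comment bodies, and behavior can vary by JIRA version):

```jql
comment ~ currentUser() AND updated >= -7d ORDER BY updated DESC
```

Saved as a filter, that's a cheap "what did I get pulled into this week" dashboard.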

This post is getting long and I want to talk about how @mentions only allow you to notify one person at a time.  Wouldn't it be nice to quickly and easily notify a bunch of people at once?  Well, as of this post, JIRA doesn't support that, but there's a pretty nice workaround.  I'll get to that next time.

Saturday, April 05, 2014

Why Story Points are Cool

Sure.  Dilbert Comic.



So to carry on the conversation where some people are saying that estimates, in general, are a waste of time, I'd like to cover some situations where having Story Points are particularly useful.

Do we agree on the requirements?
You really shouldn't need an excuse to drag product/business end users, QA, and developers together for one last sanity check on some requirements.  Getting everyone to agree, at the very least, on what the requirements ARE, and then taking the next step to cover some technical detail on what the effort would be, is valuable in itself.  You definitely don't want "the expert" doing the estimation alone, as estimating as a group mitigates the risk of "expert getting hit by bus."

Do I have enough ready-to-work Stories in the Backlog?
As a product owner, you need to know whether you have enough stories pokered and agreed upon by the team for the next sprint.  This is dependent on knowing your team's average velocity, whether people are going to work vacation days, etc.

Also, depending on your team's process, a Product Owner can ensure that the backlog consistently has a decent amount of low point stories that the team can incorporate into their own sprint.  This is for teams that have the rule that a developer can bring in a story after the sprint start only if they think they can complete the added story within the same sprint.
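As a sketch, a Product Owner could keep a saved JQL filter for exactly that pool of small, unscheduled stories (the project key is made up, and the "Story Points" field name and status values depend on your instance's configuration):

```jql
project = MYPROJ AND issuetype = Story AND sprint is EMPTY AND "Story Points" <= 3 AND status = "To Do" ORDER BY Rank ASC
```

If that filter runs dry, it's a signal that grooming is falling behind.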

When are we done with Planning?
Pretty much exactly the same thing as the previous point.  More importantly, when can you end the planning meeting?



Where Story Points don't help:
There's one scenario where Story Points may just be superfluous: when a team typically breaks stories down to the point where they are all essentially equivalent levels of effort.  This exercise is particularly useful for teams with extremely short sprints.  It all depends on what your team wants to accomplish and how they want to accomplish it.  However, if just about all stories are broken down to 3-point stories, you're still running through the exercise of Pokering, but rather than asking "what's the level of effort for this story," you're asking "is this story a 3 or not?"

... but don't get too focused on those Points
It's really easy to latch onto some quantitative value and start drawing conclusions.  People go as far as asking "why are we FAILING our COMMITMENT?!"  In the end, Story Points are just a tool to help us answer the following questions, not the ONLY metric for answering them:

  • Are we meeting customer expectations?
  • Is there any action we can take to improve?
This is really hard to drive home for a lot of people and you're likely going to have a couple "electrolyte" conversations but it is truly worth it.

... but what about our commitment?

Either way, I hope this provides some ideas, for Product Owners in particular, on using Story Points to help manage their day-to-day work, and drives home that they're just a tool to aid efficiency and identify potential areas for improvement.  We should never become slaves to our estimates.

Monday, March 31, 2014

Whatever Happened to Risk when We Went Agile?

There seems to be some sort of rallying cry that Story Points and, in general, estimates, should be gotten rid of entirely from any sort of software development process.  I think this is, generally, a very strong kneejerk reaction to some pains through the development and release process in terms of managing expectations.

Usually said pains come from someone managing expectations in the following manner:

  • A Release is planned with a set number of User Stories defined
  • Said user stories are pokered 
  • Project Manager compares the sum of the user stories against the team's velocity and converts that into sprints - let's say they determined that it sums up to about 4 sprints.
  • Project Manager sets a release date approximately 4 sprints from now.
  • Project Manager attempts to hold the team accountable to the Release Date.  The word "commitment" seems to be the word of choice, lately.
The most glaring issue with this situation is something that doesn't get mentioned a lot in the agile process: accounting for risks when managing expectations.

We've all probably seen various different risk charts that show likelihood vs. impact with mitigation plans associated with each.  These have all but disappeared.  Granted, I'm not saying that a team should get back to long drawn out risk powerpoints, but it's the team's responsibility to bring up these risks.  It's usually the project manager's responsibility to communicate these risks out.
I get the impression that a lot of agile teams have lost sight of actively tracking risk and reflecting it back to project stakeholders.  It's not like they've gone away because you're doing Scrum.  

Here's my unscientifically backed opinion on medium- to long-term estimates: a shoot-from-the-hip estimate of resources and man-hours from an experienced manager, in conjunction with a development leader or two, is just as good as a thoroughly pokered-out backlog.  You take a look at it from a high level, identify some risks, and, based on what you know about the team, manage the expectations of the stakeholders.

Some may argue that vaguer stories with point values far exceeding your sprint threshold are just as good.  I don't have much direct experience trying that, but it's something to think about.  When you get story points as high as 100, to me that's pretty much the same as a shot from the hip.

The last thing you want, however, is to attempt to define all the lower level user stories at the beginning.  You will find yourself spending an arduous amount of time at once to try to estimate something months in advance and opening yourself to the challenges that impact Waterfall.




In the end, it all boils down to plain ole communication, including the appropriate people in the decision making process, and getting buy-in for the final decision.  At the very least, you didn't spend an ungodly amount of time holed up in a room trying to think of every single thing for the next couple of months.

So why poker at all? Well, there are a boat ton of uses where pokering, with Story Points in particular, is very useful.  This post is getting long so I'll talk about how they help smooth out the process from sprint to sprint in the next post.


Saturday, March 22, 2014

JIRA Shenanigans - Attach Screenshot Feature

If there's one feature in JIRA that is both awesome and not-awesome at the same time, it's the Attach Screenshot feature.  For an organization whose culture, for some reason, has been imbued with pasting screenshots into Word documents, and then attaching the Word documents in ALL of their tools, the Attach Screenshot feature can be a time-saver both for the one reporting the Issue and for the one consuming it.
ಠ_ಠ

Of course, one can point you to JIRA Capture, which, I'll admit, is a super awesome tool for testing web applications and contains a slew of features to aid QA in recording their test sessions and providing visual aids to others of what they're observing.  

Convincing an organization to drop more money on a tool is very hard, so let's get back to why the Attach Screenshot feature can be a pain:

Chrome and 64-bit Java + other Java Shenanigans


So, straight out of the box, if you like to use Chrome and you only have 64-bit Java installed, the applet will not work.  If you're not an admin who can apply the solution I'm going to describe below, you're going to have to do some configuration.

JIRA has historically been a tool directed at more technically minded users, but this specific limitation is extremely tough to get some people to A) find a fix on their own and B) actually be able/willing to fix it. 



Additionally, you can easily get the question:

"Why should I work so hard to get a tool that we PAID for to work the way I want it to?!"

My answer to that is that most people don't realize other comparable tools cost way more without the customization capability, but nobody wants to hear that.  It then isn't surprising that this very same culture will lead people back to the word doc attachments (without even considering that they can attach saved images).  It is quite the endless cycle of violence.

I didn't pay for Linux so it's OK that I have to
spend hours to make it do what I want!

So is JIRA going to fix this?  Indeed they are!  As of this posting the issue is "In Progress" so I'm hoping it shows up in their next minor version.  


Fortunately, we don't have to wait for the next version of JIRA to get this feature.  Atlassian Labs has a plugin in the Marketplace that isn't dependent on any 3rd party installation.  

Better yet, if you have trouble in your organization with making upgrades to existing tools (aka you're not going to upgrade JIRA even if this feature is available), you can install this plugin on older versions of JIRA.

Ripped off the Marketplace page
Getting a feature that works right away for all users is the best way to get adoption, and can aid in quashing the Broken Window effect that may be plaguing your cultural processes.

While you're at it, definitely check out other features that Atlassian Labs has developed that may go into future versions.  Find the appropriate feature request in JIRA, watch it, and upvote it.  The more you use Atlassian's JIRA as a user, the more you can figure out how it can help your organization.

Monday, March 17, 2014

Random JIRA Shenanigans #1 - Development Panel

I've decided to start putting in random JIRA configuration notes in here that may be of use to some people: JIRA's 6.2 release has a pretty sweet feature called the Development Panel with a lot of great features if you're using Stash and other git tools.
Unfortunately, we're not using any of those tools.

 BUT, we are using Fisheye 3.3.1 and the big bonus with this feature is that it explicitly shows us the branches an issue is being worked on. Technically, because we're using Gitflow, it shoooould be one branch...

Either way, there were a couple of finicky things we had to do that weren't explicit in the documentation, to make existing integrations with Fisheye/Crucible continue to work as expected.

Initial Configuration

  • Fully Trusted Applications Links between JIRA and Fisheye
  • All Users in Fisheye are in JIRA (but not the other way around.)  Fisheye uses JIRA User Directory for Authentication
Final Configuration
  • Edited Application Link between JIRA and Fisheye
    • Disabled Trusted Application Link
    • Enabled 2-Leg OAUTH
      • Impersonation Enabled.
The first bit of finickiness was that I didn't have Trusted Applications disabled in the Application Link.  This prevented the Development Panel from showing up at all.  A quick hit on Atlassian Answers gave me the answer:

The next thing, however, was a little odd.  I did not have Impersonation Enabled with 2-leg OAUTH.  In hindsight I don't even know why I didn't do it, but when people were attempting to close their code reviews in Crucible, they were seeing this guy:


Fisheye was having an issue getting specific data for an issue in JIRA.  Enabling Impersonation did the trick, allowing users to transition the appropriately linked issue if desired.  Specifically, because Fisheye's user base is entirely based off of JIRA's, we didn't have any issue with Fisheye utilizing this feature when interacting with JIRA.





Thursday, March 06, 2014

Things We're Exposed to as Children that We Forget for Work

Oh Bert'n Ernie.  I'd like to think that I am Bert most of the time, but I'm sure I'm guilty of being Ernie when describing the rationale for requirements to others.




If you didn't catch Harrison Ford make a reference to this with Glenn Close you need to watch Air Force One NOW.  But seriously.  Expectations management is hard.  Scope creep is a bitch.


This was the best Telephone game video that I could find on Youtube.  It's surprising how often you see everyone in the room take little notes in their note pads to take off to their teams on what happened.  Heck, I've seen this happen when people enter things in a spreadsheet, and then someone else has their own spreadsheet, and they work together to make one AWESOME spreadsheet!

Saturday, March 01, 2014

User Stories - Asking some Questions

If you're looking for me to spell out what specific elements a User Story should have, I'll just point you to a test framework called Cucumber, where you write "Cukes" to describe the behavior you're testing.  When thinking about how you're going to test something, the User Story becomes pretty apparent.

http://cukes.info/

But rather, whether it's Business Analysts, Developers, Product Owners, or whoever else writing user stories, there are a few key questions to ask that can drive what your User Story should look like and, more importantly, what supplemental information may aid in the development process.

Whose problem are you trying to solve or alleviate?

If this isn't the first question you're asking, you might as well go home.  Now.  You're home?  Step outside.  Walk a block.  Ok.  Go home.  Look at cukes again.  I'm not saying you should be using Cucumber for your test framework, but they have very good ideas.

How can someone validate this?

See the cukes.
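If you haven't seen a cuke yet, the core idea is that each user story decomposes into Given/When/Then steps that execute as a test.  Here's the shape of it in plain Python rather than actual Gherkin; the billable-order scenario and all the names here are entirely made up for illustration:

```python
# Hypothetical story: "As a dispatcher, I want an order to become
# Billable once it ships, so that invoicing isn't delayed."

class Order:
    """Toy domain object for the made-up scenario."""

    def __init__(self):
        self.shipped = False

    def ship(self):
        self.shipped = True

    @property
    def billable(self):
        # Business rule under test: an order is billable once it has shipped
        return self.shipped


def test_shipped_order_is_billable():
    # Given an order that has not yet shipped
    order = Order()
    assert not order.billable
    # When the order ships
    order.ship()
    # Then it is Billable
    assert order.billable


test_shipped_order_is_billable()
```

Notice how writing the "Then" clause forces you to answer "how can someone validate this?" before a single line of production code exists.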

Who's Going to Consume Your User Story?  Who participates in the release cycle?  Who is responsible for validating your implementation against the User Story?

To me, these are all the same question.  However, you may get different answers to these questions, depending on who you're asking.  This is something that's very specific to your project and organization.  However, all the people that may be identified as answers to these questions can benefit from reviewing the User Story.  Why is that useful?  How often do you find yourself having to demonstrate a feature to someone to explain to them what the feature is?  How often are THEY then demonstrating it to others to explain to them what the feature is?  Your organization can have a slew of handoffs where a clearly defined User Story can save time and reduce the Telephone Effect.  Here's an overloaded example:

Product Owner -> Developer -> QA -> Training/Documentation -> Support

In a conversation the other day, an IT manager that supports a huge bio-research organization mentioned that the largest challenge towards their adopting Scrum was that [non-developers] think that it's "just for developers."  Without diving any further into the specifics, I feel that this is due, in large part, to some organizations not disseminating User Stories out to non-developers.  In a more Waterfall environment, you may have a very extensive requirements and design document... that nobody reads.

One potential pitfall for a User Story when it's "just for developers" is when someone, in reality, is simply writing a technical task to be done.  With that, I simply point back to Cukes again.

Do you need buy-in for your User Story?

This heavily depends on your organizational culture and how much trust everyone puts into the Product Owner.  However, when seeking feedback, identifying potential gotchas, or simply trying to come up with a better design, answering the following as part of your supporting material can provide great food for thought.

  • What's the current behavior?
  • Why is it not optimal / desired at all?
Working towards buy-in for your User Story not only helps people feel more valued, but also increases the organization's trust in your ability to identify the overall direction of the product.

And with that last sentence in mind: if not to be more efficient, increase organizational cohesion, or add clarity to the process, then think about these questions when writing User Stories simply because your life gets easier when people trust you more.






Sunday, February 02, 2014

User Stories - How they're Useful

All too often, organizations spend a stupid amount of time talking about what information should go into User Stories and how their content should be structured.  They forget to ask the question, what do we want User Stories to do for us?

Typically, people see User Stories as a tool to answer one question: "What do the developers need to code?"

In a TDD-oriented environment and, surprisingly to a lesser extent, when there is a separate QA team, people use User Stories to answer the next question: "What needs to be validated to accomplish our goal?"

One question that doesn't get asked, though, is "How do we keep the Product Owner accountable for what we produce?"

"That's not what we talked about."

When the demo happens, what's demoed may not match what the Product Owner had in mind.  When this happens, the conversation can quickly devolve into a "well, you said..." argument where there is no record.  This both negatively impacts the team's relationship with the Product Owner and wastes valuable time.

Stories provide a snapshot of requirements at a given sprint.

With a recorded User Story, the team and Product Owner can review the Story, review what was discussed in the story (ideally relatively close to when work started), and move forward.
  • What was ambiguous in the Story?
  • What Questions did we not ask?  What did we not write down?
  • How can we improve on our stories the next time?

By improving the story development process, a team can greatly reduce rework and avoid contentious conversations with the customer.

"Is this behavior by design or is it a bug?"

Far into the future into production support, the User Story can further aid in answering the super contentious question: "Is this behavior by design or is it a bug?"  This is a pretty overloaded question.
  • When was this feature originally developed?  
  • What was the expected behavior at this time?
  • Does the observed software behavior meet this?
  • Do current customer processes meet these?
The longer this question is discussed without any hard answers, the greater the negative impact it can have on the Team/Product Owner relationship.  This question also greatly impacts whether the solution will be an End User behavior change, a bug fix (typically paid for by the Team budget), or a new feature (typically paid for by the Customer's budget).

Without these scenarios in mind, User Story development can easily have a Fire and Forget mentality.  This can lead to a lot of avoidable costs and negative impacts on customer/team relationships.  Next time I'll talk about more questions people should think about when performing User Story Development.

Sunday, January 26, 2014

Cost Beyond Code #2

Some quick feedback on my rant mainly asked for examples of a costly situation that results from poorly managed requirements.  I'm going to use a lot of "probably's" and "likely's" so just bear with me :P

The Project


  • Requirements are maintained in word docs on a shared drive (not Sharepoint) where the filenames are along the lines of "Release - June 2007" that nobody's opened in years.  Essentially, they're practically un-browseable and un-searchable in any decent amount of time.
  • These requirements were never formally reviewed by anyone.  People would show groups like QA and End User Trainers "how it should work" in a 1-hour meeting.
  • Developers tend to "own" things where one feature set is entirely done by a single guy.  No recorded code reviews.

The Costly Situation


  • Some piece of software has a feature that has been working in production for a couple years.
  • An odd behavior comes up that is keeping an order from being Billable.  For argument's sake, it is a scenario that hasn't come up in an extremely long time, and there's no documentation in the IT Ticketing system of how this was resolved the last time.
  • The guy who coded it is no longer with the company.
  • None of the "expert" users know how the software SHOULD react in the given scenario.
Cost #1: Operations
The earlier you have that money in the bank, the earlier it's making more money for you.  If this is visible to the customer, you run the risk of them simply cancelling and going with a competitor.

So let's say this IT ticket rises through the different levels of support and reaches you, the developer.

Question #1 - Have we ever encountered this scenario before?  What did we do last time?
  • If you're pretty immature about your requirements maintenance, chances are you're pretty immature about your bug tracking too.  For me, this is a classic example of a Broken Window phenomenon in a software development project.
Question (set) #2 - Why is this happening? Is it a bug?  Did the requirements cover this when the feature was developed?

There's absolutely no reason for you to look up the requirements.  You're not going to wade through tens or even hundreds of Word docs that were meticulously written by someone who didn't really think about how anyone would reference back to them.  If you're not a developer, the one thing left to do is at least dork around in the stage environment to reproduce the issue.

Cost #2: Support Investigation
Even if the issue is reproducible, without the requirements from when this feature was developed, you probably can't answer "Why is this happening?" or "Is it a bug?"  Without that context, even reproducing the issue can be a significant challenge.

You're still going to need a developer to look into this.  However, it is extremely likely that in this scenario the first 2 questions aren't even asked so one could argue that this cost in labor isn't even accrued.  :P

OK DEVELOPER SAVES THE DAY.  Not Really.


Ok, you're investigating the code to answer Question #2.  You're likely wading through code you're unfamiliar with.  At this point a lot of people will argue that things like the ability of the investigator, coding conventions, and the quality of the code will impact how efficiently this investigation can be accomplished.

Question #3 - What is the context of this code?

Commit messages can help.  Code quality can help.  However, you likely don't have any linkage from a commit message to the actual requirement.  That would come in handy, but like we said, that stuff isn't there for you.  So differentiating between "this is by design" and "this is a bug" is close to impossible.  It's going to take you some time to figure out what's going on in the code, describe it to somebody, and hopefully come up with a solution for what the user should do to keep the Order moving forward.
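For what it's worth, even without fancy tooling, that commit-to-requirement linkage is cheap to retrofit going forward: put the issue key in every commit message and git's built-in message search does the rest.  A throwaway demo, where the PROJ-123 keys and messages are made up:

```shell
# Set up a scratch repo with issue keys baked into the commit messages
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "PROJ-123: enforce billable-order rules"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "PROJ-124: unrelated cleanup"

# Which commits implemented requirement PROJ-123?
git log --oneline --grep="PROJ-123"
```

From there, git blame on the suspect line hands you a commit, the commit message hands you an issue key, and the issue hands you the requirement discussion: exactly the trail that's missing in this scenario.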

Cost #3: Developer Support
What could have been answered with a requirements lookup has now extended into a full-up bug investigation: reproducing the issue and passing it on to a developer.  You're going to have some back and forth between the developer and others trying to answer the "why is this happening" question.

Question #4 - Is this by design?

You can't tell.  Chances are your answer will depend on your opinion of the guy who wrote it.  Either way, it's going to be pretty hard to answer this question.  It will be up to some pointy-haired person to decide whether it's a bug that needs to be resolved right away in the code, the business needs to update its procedures to be aware of this scenario, or a behavior change is required that must go through some other channel of funding.

Cost #4: Aftermath - Because you can't establish what the requirements were at the time the feature was implemented, you can't hold the business accountable for what was discussed, reviewed, and implemented.  It is generally easier to get a customer to swallow the "this is by design" story, and simply update their processes, if you have the documentation to back yourself up.

Some people will say that digging up some old email can accomplish this, but that's a poor stand-in for a decent requirements tracking system.  Without this documentation, you're likely going to take up some time discussing whether it's a bug (money from the IT Bucket) or a new feature that needs to be implemented (money from the Business Bucket).

Either way, in order to move forward, the relationship between business and IT runs more smoothly if the conversation goes down the path of "ok now we know what the thought was when we implemented this, but we'd still like to submit a change request," vs. "we're going to just have to agree to disagree so what are we going to do about it?" 

There are a lot of pieces that need to be in place to reduce all the costs identified.  However, requirements that are clarified, reviewed, and traceable are key to making all those pieces fall into place (and to alleviating the cost of that, too).

A common reaction of all of this is "Documentation is always out of date when the code is implemented"

This is what User Stories in a Scrum process are meant to alleviate.  I'll talk about this in some User Story post, but I'm already getting requests for describing the costs of getting a Release out after a Release Candidate has been cut so that might just be my next post.

Friday, January 24, 2014

Oh Hey This was in Draft for a Year

I had the opportunity to speak with someone who left the food truck business.  Food trucks are an interesting beast, as they are just now becoming pretty common in the St. Louis area.  Sure, there are some pretty well established hawker stands like that dude with the boombox at Olive and 6th, but the idea of getting your food off the street is still a new experience for a lot of St. Louisans during the workday.

Anyway.  I was looking at my blog stuff and saw that I still had notes for a post here.  I'm too lazy to see about actually putting these things into wordy-form so here they are.

The Idea

Provide a food truck meal with a gourmet twist.  Meals that included a side and a drink started at $7.

How it got Started

A loan for $50,000 from a combination of friends and family

3 partners - 2 ran the truck, 1 ran the business plans, supplies, and red tape.  The lady I talked to was #3 and would sub in whenever one of the other two wasn't available.

Making Money 

Stick with the city - County had way more red tape.  Only ventured there for single-day events like Stl County Parks

Sporting Events?  There were rules about the trucks that made it too much of a pain.

Tried to do catering events.  That seemed to go well but again, that only worked well in warm weather.

Competitors

Brick and Mortar - there is a rule that you can't set up too close to a brick and mortar competitor.  That was annoying.
Other Trucks - Sure, they tried to organize with each other so that they wouldn't cannibalize each other's business, but a lot of times some of the other trucks ignored it.  You'd think you'd have a spot all to yourself and then boom.  Your lunch crowd just got cut in half because another truck showed up.

What happened at the end?

The Partnership ended when one of them just didn't want to bother anymore.  Business plan lady actually had a full time job so she didn't have time and they didn't feel like bringing in a new partner.

I personally thought the food was pretty decent and nowadays, lunch for $7 without a drink is pretty reasonable.  I really appreciated her speaking with me.

Costs of Software Beyond Code

So I might as well end my blog posting drought with a stupid gripe.

The software development cycle is but a small subset of the actual software product release cycle.

Sure, a lot of people are talking about Agile processes and getting a new release every sprint and whatnot but cutting a release for most software development teams means the following:
  • We have some features developed!
  • It's passed some sort of regression testing!
Especially in larger organizations, this is far from what's required to get a release out the door.

I work for a company where the developed software typically has 3 types of consumers:
  • Developers within and outside the company
  • A couple hundred marketing/accounting/operations folk who are in the same physical location as us
  • Thousands of marketing/accounting/operations folk that are scattered around the country
Oh, by the way, bullet two governs the rules for bullet three.

Coming from a large "systems of systems" integrator-esque company, it comes as a surprise to me how little care people take for delivering to another developer team.  Of course, they care when they're at the butt-end of the stick.

However, what's even more surprising is when a team isn't aware of all the work involved once a release has passed QA.

Especially for those thousands of folk scattered around the country, there's a boat load of preparation going on.  Training material, help documentation, videos, webinars, and conference demos are done to ensure that major features or even changes to existing features are flowed out and can be referenced in the future.

... and THEN there's the support and any issue investigation in production.  

The amount of personnel devoted to this is practically the same headcount as the development teams.

The absolute worst thing any team can do is simply provide these people some new features and say good luck!  No release notes.  No Requirements that fed into this.  Nothing.  Else.  Awesome.  You just tripled the cost of all the post-development work.  You probably tripled the cost of QA since you probably did the same to them as well.

I am of the firm belief that solid requirements, reviewed early on by everyone in the release cycle, are the key to efficiency in any software project.  The key words being reviewed and everyone.  It's pretty much everyone's responsibility to ensure that this happens as early as possible.  Unfortunately, what happens a lot is that everyone throws their arms in the air saying that it isn't their job, and a combination of the snowball and broken window effects happens.

I'll try to talk about a Shangri-la scenario in a Scrum perspective and also talk about what the overall deliverable set should be.

When I get around to it.