Tuesday, January 26, 2021

Best options for sharing state between components with React

I'm building a new web app in React, and so far I've been building everything with functional components and the useState and useEffect hooks. This is so much better than building with class-based components! This worked fine at first, but my application is getting a little complex and I now need to share state between components at different levels.

There are many different ways you can do this now with React. I evaluated several different solutions and decided to write up explanations of the different approaches, with some code examples, and my rationale for why I chose or did not choose each one.

useState is so simple, flexible, and easy to understand that I'd like to find an approach that's as close to useState as possible - ideally just a single line in a component to get the current value and be able to update it. Local state is really simple and easy to understand, so why shouldn't state shared across a few components be just as simple?

After my analysis, I found some libraries that almost did what I want and showed me the power of custom hooks. So I decided to write my own small library, make-shared-state-hook, to make shared state easy. It generates a custom hook that you can put in your components with one line, and that returns the current value and a setter function just like useState! But don't take my word for how nice this library is. Please read through the different options that I evaluated so you can make an informed decision yourself about which approach to use.

Options considered and evaluated 


Option 1 - Prop drilling

Prop drilling is a simple approach where you create a local state value in a high-level component and pass it down as a prop to the child components that need it. This approach is great for simple applications - it's very explicit and there are no other libraries or magic involved - you see exactly where the data gets created and where it's used. I highly recommend prop drilling if you only need to pass state down one or two levels of components.

But, like the application I'm building, most applications become more complex, and prop drilling becomes unwieldy in some situations. If you rely on prop drilling you'll find yourself passing props through several levels of components that don't actually need them, which is messy. And the list of props in higher-level components can become huge, as they need to take everything that all of their children will need as explicit props. This will cause different developers working in different areas to trip over each other. See the following code example:
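(This is a simplified sketch with a plain counter; the component structure is just for illustration.)

import React, { useState } from 'react';

function App() {
  // The state lives here because a deeply nested child needs it
  const [counter, setCounter] = useState(0);
  return (
    <div>
      <button onClick={() => setCounter(counter + 1)}>Increment</button>
      <CounterDisplay counter={counter} />
    </div>
  );
}

// Doesn't use counter itself, but has to accept it and pass it along
function CounterDisplay({ counter }) {
  return <SecondCounterDisplay counter={counter} />;
}

// Also doesn't use counter itself
function SecondCounterDisplay({ counter }) {
  return <CounterText counter={counter} />;
}

// The only component that actually needs the value
function CounterText({ counter }) {
  return <span>Current count: {counter}</span>;
}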

As you can see here - CounterDisplay and SecondCounterDisplay don't actually need counter - yet they have to accept it as a prop and pass it to the next component. And this is for just one piece of data - imagine how much more unwieldy this will become as more data is added.

Verdict - Prop drilling is great for just one level, but a poor choice for anything more than one level of passing. So it's definitely not an option for data that needs to be used in different parts of a complex application.

Caveat - I put this first because everyone should understand this approach and use it where possible. Creating state locally with useState and passing it as a prop to a child component one level down is definitely the simplest approach, and simple is good. If your components look fine with this approach, use it! But... don't design your components specifically so you can pass props around. When you hit a point where components are passing props to children without actually using those props themselves - that's when you need to move away from this approach.

Option 2 - React's context with providers

Context is typically talked about as the alternative to Redux if you really need to share state. It's designed for application-wide settings like locale, the logged-in user, etc. To use context, you create a context object to hold the value, and then wrap a component high enough in the tree to contain all of the components that need the context data in a provider component.

This approach is designed to make it easy to read the value in many places, but by default, setting the value has to happen in the component that renders the provider. So out of the box it only makes sense for a few types of data. However, you can get around this by making one of the pieces of data in the context a function that sets the state value. Within the provider, use useState to store the values. Here's a simple but complete example, partially based off of an approach from the React core team.
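(The counter example and the component/context names here are just for illustration.)

import React, { createContext, useContext, useState } from 'react';

// The context holds both the value and a setter, so any descendant can update it
const CounterContext = createContext(null);

function CounterProvider({ children }) {
  const [counter, setCounter] = useState(0);
  return (
    <CounterContext.Provider value={{ counter, setCounter }}>
      {children}
    </CounterContext.Provider>
  );
}

function useCounter() {
  return useContext(CounterContext);
}

// Wrap the highest level that contains everything needing the data
function App() {
  return (
    <CounterProvider>
      <IncrementButton />
      <CounterText />
    </CounterProvider>
  );
}

function IncrementButton() {
  const { counter, setCounter } = useCounter();
  return <button onClick={() => setCounter(counter + 1)}>Increment</button>;
}

function CounterText() {
  const { counter } = useCounter();
  return <span>Current count: {counter}</span>;
}

Any descendant of CounterProvider can now read and update the counter through useCounter, without any props being passed down.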

There's a lot to like about this approach. It's pretty easy to understand when looking at a component, and it keeps data separate and isolated. However, I don't like having to wrap the highest level containing all of the components that need the context in the provider. This seems like it would get unwieldy as you add more types of shared state. I also don't like the amount of work required to add a new piece of shared state - you must create the context and provider, then traverse up and edit a higher-level component to add the provider wrapper.

Verdict - I feel this approach would get cumbersome as you add more than a few different pieces of shared data. And it would tempt you to put different pieces of shared state into a single object (because that would be easier than making a new provider and wrapping another level in something new), making it unnecessarily complex and inefficient. I'd be much happier with this approach than Redux, but, as you're about to see, I feel there are better ways to share state.

 

Option 3 - use-global-hook

While searching for information, I found a really good blog article, State Management with React Hooks — No Redux or Context API, that explained how you could easily write a custom hook to share state. The blog author wrote a library, use-global-hook, that can easily be put into your application. You simply call the globalHook function once in a shared state file, passing in an initial value and an object with functions to update the data, and then you can use it in components similar to useState! You can put this in as many components as you want and it will work great - any component can update the value and all components will get the update. Here's a complete example where I'll show how to use it with two separate pieces of data and multiple components that read and write each piece of data:
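(The component names are mine, and the globalHook call shape shown is the one described in that blog post - it may differ in newer releases of the library, so double check its docs.)

import React from 'react';
import globalHook from 'use-global-hook';

// First piece of shared state: a counter, with an action to update it
const useCounter = globalHook(
  React,
  { counter: 0 },
  {
    increment: (store) => {
      store.setState({ counter: store.state.counter + 1 });
    },
  }
);

// Second piece of shared state: a user name
const useUserName = globalHook(
  React,
  { name: '' },
  {
    setName: (store, name) => {
      store.setState({ name });
    },
  }
);

function IncrementButton() {
  const [counterState, counterActions] = useCounter();
  return (
    <button onClick={() => counterActions.increment()}>
      Clicked {counterState.counter} times
    </button>
  );
}

function CounterText() {
  const [counterState] = useCounter();
  return <span>Current count: {counterState.counter}</span>;
}

function NameInput() {
  const [nameState, nameActions] = useUserName();
  return (
    <input
      value={nameState.name}
      onChange={(e) => nameActions.setName(e.target.value)}
    />
  );
}

function NameText() {
  const [nameState] = useUserName();
  return <span>Hello, {nameState.name}</span>;
}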

Verdict - This approach has almost everything I want! But there are a few things I don't like. Having a separate actions object that you have to pass in seems like an unnecessary abstraction - why not just expose the function to set the data, like the blog post shows an earlier version of this library doing? You also cannot use simple types; you have to use objects. This is not like useState - I much prefer the simplicity of useState, where you can pass anything to save as state. The way it's written also seems to encourage a single global state, which I am very much opposed to - why keep all state together when you can keep it separate? You can pretty easily have separate pieces of state by calling globalHook in separate places to create separate hooks, but forcing the data to be an object makes this seem counterintuitive.

Option 4 - make-shared-state-hook

The blog post in option 3 and use-global-hook are really close to what I want, but I want something that works just like useState. From the blog post on use-global-hook, I saw just how easy it is to use custom hooks to accomplish this. So I decided to create my own library to do this, make-shared-state-hook. Here is an example:

Create each piece of shared state:
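Something along these lines in a shared state file (the pieces of state here are just examples; see the package README for the exact import):

// sharedState.js
import { makeSharedStateHook } from 'make-shared-state-hook';

// Each call takes an initial value and returns a hook that behaves
// just like useState, except the value is shared by every component
// that uses the hook.
export const useCounter = makeSharedStateHook(0);
export const useUserName = makeSharedStateHook('');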

And here's a full example using that:
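Any component, anywhere in the tree, gets the value and a setter with a single line - no providers, no wrapping (again, the component names are just for illustration):

import React from 'react';
import { useCounter, useUserName } from './sharedState';

function IncrementButton() {
  // One line, just like useState - but shared across components
  const [counter, setCounter] = useCounter();
  return <button onClick={() => setCounter(counter + 1)}>Increment</button>;
}

function CounterText() {
  const [counter] = useCounter();
  return <span>Current count: {counter}</span>;
}

function NameInput() {
  const [name, setName] = useUserName();
  return <input value={name} onChange={(e) => setName(e.target.value)} />;
}

function NameText() {
  const [name] = useUserName();
  return <span>Hello, {name}</span>;
}

export function App() {
  return (
    <div>
      <IncrementButton />
      <CounterText />
      <NameInput />
      <NameText />
    </div>
  );
}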

Now this is what I want - something just like useState that I can use anywhere in the application, in any number of components, with all of them coming back to the same data. It meets all of my goals!

There is another library that does almost the exact same thing, react-shared-state-maker. I discovered this as I was publishing make-shared-state-hook. Great minds must think alike, I guess? I still decided to publish make-shared-state-hook, because I wanted to make sure the library has no dependencies. make-shared-state-hook only relies on your application having at least React 16.8 as a peer dependency; no other libraries are needed!

Verdict - This approach has everything I want and meets all of my goals, so I am using this library. The simplicity and power of hooks are amazing and have brought back my love of React after suffering through many years of Redux (see more about why I am not using Redux in a later option).

Caveat - just because this is simple and easy doesn't mean you should use it everywhere. Only make state shared between components when you really need it to be. Keep as much in local state with useState as possible and your application will be simpler, less buggy, and easier to maintain. This approach does make it so that when you need to share state, it's almost as easy as using local state. 

Options not seriously considered

 

Option 5 - MobX

I took a quick look at MobX. I think MobX could have some good applications if you don't have React everywhere, or if you have some extremely complex use cases for sharing your data. But in my opinion, it feels too complex for just about every use case. And I don't like breaking away from React just to have shared state. So I didn't do a detailed analysis.

Option 6 - Redux

Redux is always an option. It's been around since 2015 and has been battle tested. Because of my experience with it, I am taking Redux off the table. My prior company started using Redux back in 2015 and built most UI applications with it for 5+ years, and some of the biggest applications were worked on by dozens of different teams over those years. So I got to see firsthand (as both a manager managing teams building applications with it, and a coder building simple and complex applications with it) just how bad some of the problems with Redux are.

The biggest issue I have with Redux is that it's so hard to wrap your head around what is happening when you're looking at the component code. And if you use react-redux with connect, it's even harder to grasp what's happening. Let's say you're looking at the code for a low-level component. You want to know where the data for the props comes from. So you have to look at the higher-order component, which is usually in a separate file. Then look at a mapStateToProps function to see what state values it pulls. Then look at the reducer (typically in a different file) to see what the state actually looks like. Then look for dispatches across the application with an action (yet another thing to look at) that the reducer handles, to see where the data is being set. And all of these are typically in different files. Except it all comes back to a single place where you have to tell Redux about each reducer, and all the data is stored together in a single object. By this point you've probably forgotten the details of the component, because you've had to look at code in at least 6 different files, each with a different concept to wrap your head around. You can't exactly right-click and tell your editor/IDE to find usages of a piece of state.
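To make that concrete, here's a stripped-down sketch of the pieces you end up tracing through (file and variable names are just for illustration):

// counterActions.js
export const INCREMENT = 'INCREMENT';
export const increment = () => ({ type: INCREMENT });

// counterReducer.js - the shape of this slice of state lives here
export function counterReducer(state = { value: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { ...state, value: state.value + 1 };
    default:
      return state;
  }
}

// store.js - and everything comes back together in one root reducer
import { createStore, combineReducers } from 'redux';
export const store = createStore(combineReducers({ counter: counterReducer }));

// CounterText.jsx - the component only sees props; to find out where
// `counter` comes from you trace mapStateToProps, the reducer, and
// every dispatch of the action, each usually in a different file
import React from 'react';
import { connect } from 'react-redux';

function CounterText({ counter, onIncrement }) {
  return <button onClick={onIncrement}>Count: {counter}</button>;
}

const mapStateToProps = (state) => ({ counter: state.counter.value });
const mapDispatchToProps = { onIncrement: increment };

export default connect(mapStateToProps, mapDispatchToProps)(CounterText);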

Redux fails at many other things I'm looking for - it takes a lot of code to make a new piece of shared data and use it (compare that to the 2 lines needed for make-shared-state-hook), your reducer code is burdened with having to account for all of the other state being there, bugs can easily slip in when state data changes and it's hard to find what's using it... I could go on for a long time about why not to use Redux. Now I'm sure there may be a few cases where you need a little more than what make-shared-state-hook offers, but I'm not sure Redux is the answer.

Option 6a - React's useReducer with context

As much as I love how React added hooks in 16.8, I was really baffled by the addition of useReducer and dispatch. To me, this is not really an option - you get most of the problems and complications of Redux, plus you'll encounter some problems that Redux solved long ago.

About the Author

I'm Brent Sowers, a principal software engineer at BlackSky. I've been writing code and managing software teams working with many different languages and frameworks for 20 years. Check out the links on the top right to find out more about me. And come work with me at BlackSky!

Thursday, October 10, 2019

Allowing developers to fully own quality

As an engineering manager of two sprint teams that had dedicated quality engineers who did all testing and certification (reporting to a different quality engineering manager), I recently tried something different - I took the quality engineers out of the two teams and placed quality entirely in the hands of the software engineers. 7 months later, the teams are running great! Levels of quality have been maintained, output of the teams has stayed the same, the amount of automated test code written by the teams has increased, and motivation and engagement have increased. I'd like to share our story of how and why this worked.

Some of you may read that first paragraph and think "Dedicated quality engineers on a team? What do you mean?" or "Quality wasn't fully in the hands of developers before?".  I would have thought the same thing before I joined a company that ran this way. If this is what you're thinking - this post is not for you. You won't be surprised by anything posted here. This blog post is for people on sprint teams where there are dedicated quality engineers who test and certify all work.
 

Prior process


So let me explain how things had previously been. We do product development in sprint teams. Within the teams, there are typically 4 to 7 software engineers and 2 to 3 quality engineers. Within a sprint (typically two weeks), the software engineers write code for user stories, deploy the code to a testing region, and then hand each user story off to a quality engineer to test and certify. The quality engineer will test it, try to break it and find issues (typically by manual testing), perform regression testing (mostly manual due to lack of thorough automation), and file bugs for the developer to fix, going back and forth until everything has been tested and any bugs are fixed. Then they certify the user story. Automated tests are sometimes written by quality engineers in the sprint, sometimes written in later sprints, and sometimes not written at all. Software engineers will typically write their own suite of API-level tests for user stories that have API endpoints, but not always. The work is mostly done on applications that have lived for years and do not have very reliable automated tests.

New process


The new process that my two sprint teams are following is similar, but all of the work is done by software engineers. These two teams now have just 4 or 5 software engineers and no quality engineers. The software engineers write code for their user stories, including all necessary automated tests - unit, API, UI, whatever makes sense. They then deploy the code to a testing region, and another software engineer (whoever has the most availability that day) does any necessary manual testing to certify the user story, similar to what quality engineers would do before. So... really not too different than before, just without the quality engineers. There's a little more to it than this around how we plan out what needs to be manually tested, but I won't go into all of those details.

My conclusions

After doing this and seeing the great results, I am really convinced that having quality engineers on sprint teams who automate, test, and certify developers' code just does not make sense in the vast majority of cases. This is a relic of waterfall and doesn't make sense with scrum. There may be other productive roles for quality engineers, possibly even on a sprint team, but that role should not be to test and certify all of the work coming from developers.

So, why is that my conclusion? And what do I have to share for how to execute a change like this?  You can read the why, or skip straight to my recommendations for how to make this work.

Why does having developers test and certify make sense?


Putting quality entirely in the hands of the software engineers brings many benefits to developers and much better efficiency to the team. Here are the main reasons why putting quality entirely in the hands of software engineers in sprints makes sense.

Better overall team cohesion

 

When everyone on the team has the same role of developer, there is a better sense of cohesion on the team. Here are some reasons why team cohesion is better:

Removing the wall between the coder and tester for work

 

When there are two different groups, one responsible for developing and another for testing, a wall exists. There is a "toss it over the wall" mentality when tasks move into test. Developers push code out without much consideration of testers' time and availability, and consider their part done once they've tossed it over the wall. Developers know testers are there and will catch their bugs, and don't account enough for the fact that testers are not perfect, or for the time and rework it takes to fix bugs. Having developers test and certify each other's work as regular practice removes this wall, because the people they are handing their work to for testing are other developers who have their own development work to do.

Unified direction for all engineers on the team

 

When all engineers on the team report to the same group, there is a more unified direction for the team. There are no longer two different groups on the team with competing priorities from their management. Tough choices that teams need to make, like sticking to a definition of done and rolling over user stories when automation is not done, are easier for the team to make and the manager to assist with. Compromises/sacrifices that affect quality and/or automation are easier to make with everyone in the same group.

Better shared understanding of the work being done

 

Developers who write application code all have some base technical skills and understanding that quality engineers will not necessarily have - especially in the active area that the team is working on. This means that discussions among developers over details of user stories tend to go more smoothly and quickly, and fewer explanations are needed. This will be discussed more in the section on efficiency gains.

More sharing of technical details of implementation and testing strategies

 

Sharing technical designs and test plans with the whole team leads to better designs and better test plans, and enables the developer to understand the testing strategy for their tasks. This can still be done with dedicated quality engineers on the team, but there is much less friction when the sharing is done between software engineers - there is a base technical knowledge that software engineers will all have from building the application code that quality engineers who do not build application code will not have.


More efficiency

 

When developers do testing and certification, there are many efficiency gains.

Reduced time spent manual testing

 

The data was very clear that throughout the 7 months, there was dramatically less time spent performing manual testing on both teams compared to before, when there were quality engineers, yet the same level (or better) of quality was maintained. This is a huge efficiency gain. I feel this is because developers have a better base understanding of the technical details of the product and are better able to streamline their manual testing to only what is necessary. A common complaint from the software engineers when quality engineers were on the team was that quality engineers were testing areas that were not necessary, and developers had to spend time explaining why testing wasn't necessary or why bugs filed weren't relevant. This inefficiency was removed with developers doing the testing/certification.

Less time explaining technical details

 

A common complaint from developers on teams with dedicated quality engineers who do not write application code is having to explain technical details of their work to the quality engineers. But when the testers are other developers, much less time is needed to explain technical details, since the other developers are also writing application code and have a base technical understanding of the code and product. Less time explaining = more efficiency. A counterargument is that the separation ensures better quality, but we did not see this as an issue.

Better utilization of everyone's time

 

Teams are rarely able to deliver functionality to testing at regular intervals throughout the sprint. Work tends to get delivered to test later in the sprint. If there are dedicated testers on the team, this leaves time early in the sprint where those testers are not fully utilized. Sometimes testers will work on the automation backlog in this time, but when certifying developers' work is their top priority it's hard to focus on and be effective at this backlog work, since developers' work can drop on them at any time. With developers certifying each other's work, they are fully occupied with their own application code development during this low-testing time early in the sprint.

More flexibility in sprints

 

When anyone on the team can test and certify a user story, there is much more flexibility. Several big user stories getting delivered to test late in the sprint is no longer as big of an issue, since there are many more people who can test. A single tester taking time off, whether planned or unplanned, no longer causes big disruptions.


More and better automated tests

 

With developers writing automation, more gets written overall, and what is written is more strategic and more efficient to write.

Delivering automation in sprint with application code

 

Having a separate group work on automation after the application coding is complete makes it very difficult to deliver automation for new work in sprint. There is just not enough time at the end of the sprint for this. So either the automation for the work is rolled over as a separate user story, or automation is abandoned for the new work, neither of which is good. With developers writing automation for their own work, it's a lot easier to deliver automation in sprint. Developers are able to write this automation alongside the application code, and can modify the application code themselves as needed to support automation instead of having to coordinate with others. This also helps developers write better application code. There is much less overhead and inefficiency when automation is completed alongside application functionality in the same sprint.

Better and more strategic automation

 

Developers gain more skills by writing both API and UI automation themselves, and think more about how the features they are developing can be written to allow easier test automation. Better and more strategic automated tests get written, with higher value, because of the increased collaboration and discussion among developers about what automation will be written, and because developers review each other's automation code. Since automated tests are software, and pretty complex and hard software at that, developers think of different and innovative ways to test that may not be thought of if this is purely the responsibility of quality engineers who do not write application code.

Less overlap between different levels of automation

 

With the developer writing all automation for their user story - unit, integration, and end to end/UI - there shouldn't be much overlap in the test code. Integration tests will cover what unit tests cannot, and UI tests will cover the end to end flow that can't be tested through integration. The developer will also follow testing best practices and put as much at the unit test level as they can - if integration or UI tests aren't needed, they won't get written! This is more efficient than when a separate quality engineer writes this automation - with two different people there will inevitably be overlap. If you have a really good quality engineer automating a lot, they may be going overboard and overlapping with unit tests, or putting a lot in slower integration or UI tests that should be unit tests.


Increased ownership and engagement from developers

 

With developers assuming testing and certification responsibilities, they will feel more ownership of what they are building, which leads to higher motivation and engagement.

Developers feeling more ownership of quality

 

When quality engineers own quality, developers naturally do not consider quality as much as they should. Once developers are testing and certifying others' functionality, they start feeling more ownership of quality in their own work, because they experience what testing and certifying others' work is like. So when developers push things into test, there are fewer issues overall for testers to find, because the developers had been thinking about quality all along - both while coding and in their local testing. This leads to less overhead from development issues being filed, and higher overall quality.

Developers having more knowledge of what the entire team is working on

 

By testing/certifying others' work, developers get directly exposed to, on average, twice as much of the functionality the team is building as they would if they were "just" developing user stories. Because a developer can test anyone else's code, they're going to want to know and understand everything the rest of the team is doing. There are many side benefits to this - developers feel more engaged on the team, are able to help more with others' work, provide more input/suggestions for designs and test plans, and, for newer developers, get to learn a lot more about the product. All of these factors lead to a healthier and more motivated team.




Recommendations for best chance of success


I'd like to share some lessons we learned on the teams, and practices which I felt ensured that this was a successful change.

Maintain strict separation of responsibilities between the coder and tester of each piece of work

 

There are endless benefits to having a different person certify a developer's work, and endless downsides to not having someone else take a look. With developers testing and certifying, it could be tempting to have a developer certify their own work. Do not allow this to happen unless the whole team discusses it as a group and agrees that no certification beyond automation passing is needed. Quality will definitely suffer, and many of the gains from having developers test and certify will not happen, if developers are certifying their own work.

Review test plans with whole team prior to development starting

 

Ensure that test plans are written prior to development starting on a task, and that the whole team reviews these plans. This ensures that the best and most efficient test plans are made, and allows the whole team to feel ownership. This also helps ensure flexibility for who tests a story - if the whole team participated in the test plan review, the whole team will have some knowledge on the work. And it ensures that the developer knows what the test plan is - this will catch potential bugs and rework before they even start coding!

Anyone on the team should be able to test work (other than the developer for their own work)

 

When anyone on the team can test and certify a user story, work is a lot easier to get to done. In standup, whoever is available can test something rather than waiting for a single person to be available. This flexibility opens up many possibilities that are hard when there are a limited number of testers - several large and small user stories can come into test at the end of the sprint and still get tested.


All developers should spend a similar amount of time on testing/certification

 

It would be very natural to have the developers who know the product best do the bulk of the testing and certification. And it would be natural to allow the developers who are more interested in testing, or more willing to do the testing work that no one else wants to do, to take the bulk of the testing. Don't let this happen. Many of the gains will not be realized if only certain people are doing the bulk of the testing. This may require a heavy-handed, top-down approach from leadership on the team to force the work to be distributed - but this must be done or you'll lose many of the efficiency gains. Not to mention, you don't want any individual devs spending too much time doing manual testing, or they'll slowly transition to quality engineering as their primary function. Having everyone do testing, even those who don't know the product too well (provided they pair up with or get guidance from someone who knows the product better), or don't want to, is a great way to spread knowledge among the team!

Developers should discuss and plan what automated tests should get written for each story

 

Developers will have different ideas about what the automation should be - what level of depth each type of test should go to, and sometimes even whether automation makes sense. This should be worked out for each story/task prior to development starting, so the whole team (including the manager) can settle on what makes sense. I'd recommend doing this as part of technical design.

Leadership needs to be a strong voice of quality on the team

 

Some developers will adapt very well to these new responsibilities, but some will not and will need help to fully embrace the quality role. The rest of the team needs to be voices of quality here, but lead developers and the manager in particular need to be strong voices of quality on the team. Stick to principles - make sure test plans are sufficient in test plan review, and make sure valuable automation gets written for all developers' work on the team. While the whole team owns quality, leadership needs to ensure that enough quality aspects are considered in test plans and automation plans/execution. Some teams might be able to do this without a top-heavy approach, but at the end of the day the leadership needs to ensure quality is adequately being planned for.

Consider edge cases and regression scenarios in test plans

 

Developers most likely will not have trouble adapting to testing the main scenarios of others' work - after all, developers have always tested these in their own work. However, they may not have much experience thinking through possible edge cases or regression scenarios. Ensure these are included in test plans - make sure the whole team is throwing out ideas in test case review for edge cases and regression areas to cover. If the test plan writer, or the entire dev team, doesn't know enough to determine areas to test for regression issues, seek out those who do (devs on other teams, PMs, app support, etc.) before assuming regression testing is not necessary.

Be smart on the level of testing done

 

You want to ensure that the team maintains high quality - but be smart and efficient about the level of manual testing done. Avoid duplicate testing of areas within the sprint by grouping testing together where it makes sense.  The team should come up with a strategy for repetitive testing (like cross browser testing, and localized content testing) that balances quality with efficiency. This repetitive testing is probably not necessary for every individual user story.

Involve product management to look at user stories

 

It helps to have product management take a look at work during the sprint. They are another set of eyes that can catch overall scenarios and look and feel issues that may get missed. We didn't have a strict process around this, but it might make sense to formalize it a bit. Regardless of whether devs or quality engineers test, I think this is a good idea to ensure that what the team is building is what product management envisions.


Invest in good automated tests

 

Good automation - automation that has long-term value to ensure quality years later, does not have intermittent failures, and can be a source of learning for future team members - takes time to make. Make this investment. Developers who have not written much API and/or UI automation may not automatically embrace this mindset and may write quick, "just good enough" automation. As a team, ensure that the right level of automation is written for all work. This will most likely lead to some tough choices - having to roll over stories because not enough automation was written, adding prerequisite work to get automation foundations in place, or letting estimates for some user stories go up. These tough choices should be made and not shied away from. The earlier you make the investment and stick to it, the more automatic this will become for all team members.

Learn from quality issues

 

If there are quality issues on the team, look at them as a huge learning opportunity for everyone. Ensure that root causes are analyzed - what could have prevented the escaped issues? Are there safeguards that should be in place? Is there enough sharing of test plans? Is the full team engaged in test plan review? Everyone on the team should participate in this, including product management, the scrum master, and the dev manager. Since developers are doing testing and certification, they will be more likely to participate in the discussion than they would be if there were a separate testing group.

Utilize standup to plan for the day to minimize context switching

 

Developers having to test others' work adds more to developers' plates. To minimize context switching, it's best to plan the day out in standup - have the team talk through who will test what. Unless it's the last day or two of the sprint, or there is an urgent production issue to get tested, don't have someone drop what they're doing the moment a user story goes into test.

Allow the team time to adapt to these new responsibilities

 

Don't go in from day 1 of a big change like this assuming that velocity and quality will be the same. It will take some time for the team to adapt - every team will be different for how long they need. So for the first few sprints, be conservative in sprint commitments and pull more work in if the testing goes faster than expected and the team adapts quickly.

Sunday, September 9, 2018

Responsibilities of a lead developer

Over the years I've been a lead developer on different types of teams for different types of companies and environments. The past few years, I've been lucky enough to be able to manage and mentor some great developers to become leads themselves. At first, I had assumed that everyone would know what being a lead is about and what their responsibilities are, since they had been on teams with leads their whole careers. However, this was definitely not the case. Every developer came into the lead role with different ideas of what their responsibilities should be. Despite me discussing the responsibilities with them, they were really surprised at just how many things a lead is responsible for. In order to help set expectations with developers over what I think a lead's responsibilities are, I put some effort into documenting what I view as the responsibilities of a lead developer, and I'd like to share that with everyone.

This list is based on my experiences of what I've seen work and not work on teams.  It also reflects the types of companies I've worked for - growing companies with frequent new hires, many less experienced developers (including many right out of school), frequently changing projects and priorities, and in general a decent amount of chaos.  If you're at a stable company on a team of very good and experienced developers working on a long running project, and not a lot of chaos, a lead developer shouldn't have as many requirements as I describe here.  But, I don't think that is the norm in our industry.  In the types of companies I've been at, a strong lead developer that takes responsibility of the output of the team is critical for the success of a team.  I have yet to see a team in this type of environment excel if the team tries to distribute most leadership responsibility to all team members and doesn't have a strong lead.

Also, to clarify, "lead" by my definition is not a manager.  For this post, a lead is not the one that is handling performance reviews, administrative issues, and overall career development for the software developers. Many times it can make sense to have the lead developer also be the manager of the developers on the team, but that just means that the person has some additional responsibilities.  They still have the full lead responsibilities too.  And a lead by my definition does not typically handle overall project planning and coordination between teams on multi team projects. The lead should have input here, but, taking those responsibilities on is a little too much for a lead in my opinion. The lead should be focusing on their individual team.

My list is NOT comprehensive, the responsibilities as a lead developer are going to vary and change all of the time, but all come back to being the leader of the team more so than anyone else on the team.  With being the leader comes accountability for the team.  A good lead developer must be an accountable leader. Read http://www.brandonhall.com/blogs/the-buck-stops-here-a-culture-of-accountability-drives-effective-leadership/ for a really quick summary of what that means. The lead (along with the manager) should be held accountable for the output/results of your team - both the good things coming out of the team and the bad things.  "The team" means the entire team - devs, testers, product managers, scrum masters - not just developers.

So here is my list of lead developer responsibilities. For each responsibility I will give some examples of how you can do this.

Ensure high quality work is coming out of team

High quality means that the work that is being produced follows all best practices, and has minimal to no bugs. There is a lot that you can do to ensure this:
  • Thorough code reviews are being done by you - you are the one who needs to be approving code on the team and enforcing this. If your team does not do code reviews, start! Others on the team should help with code reviews, but you are the point of responsibility. Many of your code review comments should lead to changes by the developer, but be sure to allow the developer space to reply if they disagree with your suggestions, and be open yourself to backing down on suggested changes. Hopefully you will learn some things from the developers in code reviews as well. This should occupy a good portion of your time. Don't just look for styling/syntax. You should be able to fully explain to someone else what the code is doing, how it's doing it, and why it's doing it that way.
  • Ensure proper test code is being added while functionality is being developed, not as follow-on work after the functionality is delivered.
  • Ensure developers are doing the proper amount of local testing - sit with them (meaning actually sit by them and have them go through the local testing with you, or if remote, do a video call with screen sharing) and work with them to see this. When you see bugs introduced by developers, sit with them and discuss it to see how the developer can improve to reduce bugs in the future.
  • If a developer continually produces code that does not follow best practices, sit with them and work with them to identify why this is, and what can be done to help them to produce better code in the future.

Ensure developers are at maximum productivity and that the organization is getting maximum value out of them

Most of us work for businesses, and a business's main objective is to make money. You as a lead play a key part in this, so you must be doing what you can to ensure the developers are realizing their potential and getting as much done as they can (WITHOUT working long hours). Many leads struggle with this because they do not want to have difficult conversations and do not want to ever say anything negative to people, but this is important as a lead. Managers should be helping with the difficult conversations, but all responsibility for this cannot be deferred to the manager.
  • Make sure each developer is producing enough - if you feel they are not, you need to work with them to ensure they understand the expectations, and see if there are blockers/issues that you can help with. Take their experience/background into consideration for this.
  • Make sure work is planned out well enough to reduce conflict/overlap with others on the team and outside teams. This can be really hard to do, but will have a huge impact on output of the team.
  • Make sure that there is always work lined up for each developer - my guidance is every developer should have at least a full week's worth of concrete work lined up for them and that they need to know what this work is. This will increase their priority on getting work done - if they do not have anything lined up beyond their current work, they will take longer on their current work (I wish this wasn't true but in my experience it is true for most developers). Be explicit on this and don't assume developers will pick up random unassigned work. Anticipate when they are going to run out of work and get ahead of it.  Depending on the environment they can be the ones to pick the specific work, but you must ensure that what they are working on next is figured out. If you have good developers, this is going to be a real challenge, keeping up with them should be difficult!
  • If there are things the developer continually does that slow them down, sit down with them to understand their process, see if you can suggest a better way of doing it, or plan out tooling/framework changes to help make things more efficient.

Mentoring and coaching developers to grow

You work with them more than their manager does (unless you are their manager), so you are in more of a position to mentor developers than their manager is. Whenever you see a different or better way for them to do their work, sit down with them individually and discuss it. Here are some ways you can accomplish this:

  • Ensure communication is open and that the developer understands what you mean in your communication.  If you frequently get head nods, and "yeah"'s, this might mean people don't understand what you mean and you need to find another way to phrase what you are trying to say.
  • Ensure that every developer has challenging work, depending on their skill level and experience. Watch over time to ensure that they are not continually given non-challenging work.
  • Ensure that developers don't get pigeon holed in to always being the one to do a specific type of work. If one developer always takes a specific type of work, get this work spread out to others on the team. This might mean you have to jump in to a conversation and say something like "Joe, I appreciate you wanting to take this on, but I'd like to get more knowledge of this spread out among the team. Sarah, can you work on this instead?"
  • Make developers uncomfortable. Get them outside of their comfort zone on occasion.


Help developers - unblock them when they encounter difficult circumstances

Don't wait for a developer to reach out to you - if a developer has not made progress on a task for a few days, it might be best to talk to them individually to see why, and what can be done. In general you cannot rely on developers, or for that matter anyone, to ask for help themselves. Most people don't like to ask for help and like to try to figure things out themselves, but in a team setting, spending days figuring things out on your own is not always best for the team. Granted, the opposite is not good either - if developers are immediately asking others for help, you'll need to sit down and talk to them and get them to spend some time trying to figure things out on their own first.

If you don't know how to help a developer that is stuck or in a difficult spot, reach out to others, or get the developer to reach out. First share on the team. Then escalate it up - reach out to other leads, other developers in the company, etc.

Ensure work has had the proper amount of design done, and that the design supports long-term usability, scalability, and future maintenance

When teams don't have a process for doing technical designs on work and sharing them with each other, I've seen two things happen. First, if the developer doing the work is good with designs, they'll do decent designs but deprive the rest of the team of an opportunity to suggest better ways, and deprive others on the team of a chance to learn. Or, the design just doesn't happen. This leads to systems and code being produced that are not optimal, and will definitely lead to rework. And it will definitely deprive the team of chances to learn.

As the lead developer, you don't have to be the one to do all of the designs yourself, but, you have to ensure that the designs are being done and shared with the team, and need to ensure that the designs are good. Work being done should be designed to properly meet future needs, be able to scale to meet the expected usage needs (and a little more), and not require high amounts of maintenance in the future.  That can be really hard to do in many organizations, because this will typically mean things will need a little more time to develop right now. But that additional time will pay off very quickly.

I feel that the lead should be doing the most difficult designs on a team. If you don't have the technical skills to do this, and you are not brand new to the team/organization, then, maybe you shouldn't be a lead.  When the design decisions are left to the loudest one on the team, or deferred to architects who aren't part of the day to day of the team, the results will not be good. So you as the lead are responsible for this.

Having the technical skills to do good designs also allows you to help better shape designs that others on the team do.  Much of technical designs are opinions and there is no clear right or wrong answer.  So when disagreements arise of how things should be designed, someone has to make the call over which path to take.  This should be you making this call.  Depending on how your company is structured you may also have architects that will do this, but, I have not worked for organizations that have enough architects to go around to do this for every team.  Now, you don't want to come across as a dictator, so if there are design decisions that others take that you disagree with but you don't feel will lead to poor systems being implemented, you may want to back down and let the designer go with what they feel is the right path.

One process I recommend to achieve this is to create a standard design template, with questions to be answered for every task/user story. Some example questions for the template: what data structure changes are needed (like new database columns, data types, etc.), what interface additions/changes are needed if APIs are involved, and a high-level description of the automated testing that will be written. Having this template is not enough though. You need to ensure that it's followed, and this will require persistence, because not every developer wants to be bothered with this type of activity. A key part of this design process is not just having developers do it, but having them share it with everyone on the team. This will lead to better designs - the more people think about a problem, the more potential solutions will be discussed. This is also a great way for less experienced developers to learn from the senior developers.

Part of this process can be to require the design be done and reviewed with the team before coding starts on every task.  Many leads do not like issuing mandates like this, and in some environments it's not necessary, but if you feel this will help, by all means institute these processes.

Don't let the design reviews stop at just the team! Reach out to other leads, managers, architects outside the team, etc to get more input for sufficiently complex designs. You as the lead will need to be the one to initiate these conversations.

Ensure future work is properly lined up

This means getting ahead of requirements. If product managers are not ready with upcoming requirements for the team, continually try to work with them to get requirements ready. Offer to help. If they are not willing to let you help, or cannot get things lined up, escalate to your manager and/or their manager.  I don't want to prescribe a specific process, but you as the lead need to help to ensure the work is lined up.  Dropping a ton of requirements on a team doesn't mean the team should start working on these right away - work needs time to be thought through, planned, and designed.

Work with product management to ensure that requirements coming in to the team are structured for maximum efficiency in accomplishing high level project objectives

You should be working with PMs prior to planning meetings on this. Always think of better/more efficient ways to accomplish the end goal and suggest this to PMs (and designers if it's a UI heavy thing). Suggest ways to get to an MVP sooner.  Challenge yourself to always do this even if the idea seems weird/unusual.  Product management working on their own to come up with the path to get to the end goal usually does not lead to the best path.  The more input on this, the better.  Feel free to involve others on the team in the discussions too!

Work with product management to ensure that work coming in to the team is sensible and is a valuable use of the team’s development time

If you don't think the work makes sense for the team to do, or it does not seem worth sinking the time into, talk to the product manager. You probably aren't the only one thinking this. Push them to come up with the business value and explain it to the team. Organizational politics can come into play here, but don't shy away from questions because of this. Getting a project that won't provide much value put on the shelf provides huge value for the organization!

Work with product and project management to ensure project objectives are understood by the team

The more the whole team understands the objectives and feels a sense of ownership, the better ideas will be proposed, and the more motivated the whole team will be (more motivation = happier team members = more output).  This isn't just the responsibility of the project manager or product manager. As lead you need to ensure the whole team understands project objectives.

Ensure builds and releases are stable and go smoothly

This can vary a lot based on how the company is structured with devops, SRE, and development teams. But you as a lead should ensure there is some sort of continuous integration/continuous deployment system, and that it is kept stable. If there is no easy/central build and deployment pipeline, you've got some work on your hands!

Keeping this stable is crucial. When it's time to get a developer's code pushed out, whether for internal testing or to production, it must be easy to do. I've seen builds fall into disrepair and unstable automated tests cause people to bypass all automated tests. If you're lucky you'll have someone on your team who really cares about this and will help others when things start to fail. But if you don't, you'll need to take this up yourself. I don't want to prescribe any specific solutions, though; different organizations take very different stances on this.

Ensure proper communication and collaboration is happening between different groups (development, QA, product management, project management, managers) on the team, and that relationships are positive

Some ways that you can ensure this is:
  • Work to get more communication to happen between individuals on the team - you'll probably need to continually push developers to go talk to others on the team and not rely on chat rooms too much.  
  • Make sure that all stakeholders who could have an interest are in relevant discussions/meetings. This is not just the job of the scrum master.
  • Interject yourself in communications that appear to be turning negative to keep them positive and focused

Ownership over services/functionality that your team owns

In most organizations, teams "own" different areas of functionality of the applications that are being used in production.  The team isn't necessarily actively working on all of these areas but should be considered the owner.  What ownership means can vary wildly but typically involves investigating and fixing bugs found, monitoring for functionality degrading/breaking, addressing performance/scalability, and planning future major enhancements.

This can be difficult when code is long-lived and teams frequently change. As the lead, you should be the primary point of contact for these areas. So when inquiries come in, you need to be able to answer them yourself or point to someone who would know. If you don't know where to start, that's a good sign that there needs to be more knowledge sharing and documentation. As the lead, it's your responsibility to ensure that knowledge is spread so there isn't one single person with all of the knowledge of something. For anything that your team owns, if a high-priority bug comes in, your team should be able to quickly diagnose and fix it, regardless of who is on vacation. If this is not the case, work to make it true.

Taking care of cross team requests

Lots of random things come up from people outside of the team. Ensure that these are given the appropriate priority and are not ignored. Many times, teams view these as distractions, and scrum masters sometimes view it as their job to shield teams from anyone outside of the team. While this may be best for short term project gains, it's not the best for the overall organization. As the lead you can help here by being a primary point of contact to ensure that others outside the team are getting the necessary support from your team, without distracting the individual contributors from their current priorities any time there is any external request.

Ensure team, and individual team members, are continually striving to improve productivity and deliver more

Never settle on a "good" velocity. Always try to push the team to deliver more, and be smarter about how they do their work.

If you feel individuals on the team can do more, push them to do more. Give them more assignments.

Manage upwards to ensure that management above the team is in the loop and able to help

Some ways you can do this are:
  • Escalate team issues to your manager and seek out their guidance/advice. You should not try to always paint a positive picture of everything to management; this shields issues that they could help with, and makes it look like you are not doing enough to get the team to improve.
  • Ensure your manager, and any other relevant stakeholders, knows as early as possible when there are risks to missing commitments that the team has made.  The earlier risk is identified, the better action can be taken (maybe no action is taken but people won't be surprised/taken off guard when commitments are missed)
  • Ensure that the manager of every developer on the team knows of that developer's progress. You should be able to give this manager a frank assessment of the developer. Don't try to shield developers from their manager; if they are struggling, their manager should be involved to try to help.

Coding

You're probably not going to get a ton of dedicated time to write code. But still try to set aside time to write code and deliver functionality that the team is working on. If you go too long without writing much code, you'll lose touch with what the developers on the team are going through. I recommend that every lead take on at least one development task per work increment (like a sprint).

However, it's probably best to not commit to delivering things that are extremely time sensitive, or that could put things at risk if they slip a few days. Try to work on relatively independent pieces so that if you are not able to get much time for several days, it will not impact the team much.

Conclusion

Well, that's a lot to discuss. I hope you found this valuable, whether you are considering becoming a lead, you are a lead, or you are managing/leading leads yourself like me.  Writing this up and sharing it with my leads has really helped them grow into the role and become great leaders of their teams.  If you have any different ideas, or other types of lead responsibilities you can think of, I'd love to hear about them in the comments!

Tuesday, May 26, 2015

My recommended developer interviewing and hiring practices

Throughout my career, as a software developer and manager at growing companies, I've interviewed a lot of candidates for different software developer positions. Everything from college interns to CTO.  I've also been on the other side of the table, interviewing with lots of great companies.  I've learned a lot about what I think works and what doesn't, and what good ways to evaluate developers are, and I wanted to share my experiences.

My Ideal Process

First, let me share with you my ideal interview process. I'll get in to specifics of why I think this process works great later.
  1. Recruiter or HR does the initial filtering of candidates/resumes. They need to work closely with a high level software development manager or VP to get a sense of what they are looking for.
  2. Before the recruiter reaches out to candidates, a hiring manager (software development manager/director/VP that needs the position, NOT a recruiter or HR) must approve the candidate. This will help HR/recruiter to learn more about what you are looking for.
  3. Recruiter/HR reaches out to the candidate and talks to them. They discuss a little about the position, company, company benefits, etc. This weeds out candidates that are not interested at all (salary requirements too high, not the type of job candidate is looking for, etc)
  4. Recruiter/HR sets up a technical phone screen with a senior developer or manager.
  5. In the phone screen, coding questions are asked, along with general background questions, maybe a design question. This phone screen is to weed out candidates that are obvious non-matches.
  6. If they pass the phone screen, they come on site (provided the position is not remote). There are at least 4 different interview slots (no less than 45 minutes each). Interview slots should be individual, no double teaming, or committee interviews (except for shadowing and training people on interviewing). Some coding questions are asked during on site interviews, some design, management questions if a manager position, etc. A variety of types of roles interview the candidate - at least one manager, at least one developer. 
  7. After the on site interviews, feedback is gathered from everyone who talked to the candidate (including the phone screener and recruiter). Feedback should be collaborative, ideally in a meeting. Every interviewer must give a yes or no recommendation.
  8. The hiring manager makes a decision on whether to give an offer. To give an offer there must be a strong consensus from everyone who interviewed. There should be no strong objections from anyone, and most interviewers must feel strongly about wanting to hire the candidate.
  9. Hiring manager decides what position to give an offer for and works with recruiter on salary/bonus/options/etc. There should be a developer career progression ladder established already with positions, and salary given in the offer should fall within the bands for the position. Decision on what position/salary to offer is based on where hiring manager sees them fitting (at least partially based on interviewers' feedback), plus what the candidate is looking for. Hiring manager may need to have a follow on conversation with candidate to determine this.

My Recommendations


#1 - Do a Phone Screen First!

An on site interview is a big time sink, both for the candidate and the interviewers. It really sucks to go through an on site interview for someone who is obviously not a fit. A technical phone (or Skype) screen with a developer or development manager is a great way to weed out candidates, saving everyone time. Here are some tips:
  1. The phone interviewer should be a senior developer, or dev manager, with a lot of experience interviewing. Someone you can trust to do a good interview. If the phone interviewer says no, no one else gets to talk to them, so you have to trust that this interviewer isn't saying no to potentially good candidates.
  2. Ask coding questions! Just because you're over the phone doesn't mean you should stick to just discussion questions. If you're hiring someone to write code, you don't want to hire someone who is terrible at writing code.  I've interviewed (and worked with) lots of people who had great resumes, could talk all day long, but were awful at coding and couldn't get anything done - asking a coding question can weed these people out so you're not wasting more time. I'd highly recommend https://codinghire.com/ for interactive coding - it has tons of great features. A shared google doc is OK as well if you don't want to spend the $ on coding hire (although I think it's well worth it). Just be sure you use something to watch them code interactively. I'll talk more later in this post about what types of coding questions are good.
  3. Be more forgiving in phone interviews than on site interviews. If you aren't sure whether the candidate should pass or not, bring them on site. You can take a firmer stance in on site interviews where other interviewers will get an opinion.

#2 - Research the candidate!

Every interviewer should spend just a little time googling the candidate before the interview. Here are some things to look for:
  1. Github - Having a Github account is a plus (and they should have that on their resume). Having their own projects, and/or contributions to other open source projects (not just forks with no changes), is a huge plus. This generally shows a passion for being a developer, and could show that they will be looking for outside solutions for problems, and keep up with new technologies.
    But, not having a Github account shouldn't be considered a strike against someone. There are tons of good developers out there that don't have any publicly available code for a variety of reasons. So only look at publicly available code as a bonus. And a big Github repo with lots of projects isn't always a good thing. It might mean that the person gets bored and can never keep their attention on a single thing. I've worked with plenty of developers like this, who had impressive Github repos and were always talking about new things, but could never get projects finished.
  2. Side projects - Having side projects, like a website that they developed and host themselves, a mobile app, etc. is generally a positive. This shows that they are passionate enough about writing software and learning new things that they'll invest a lot of time on the side. Unlike Github repos, this shows that they can actually produce working projects. Like Github accounts though this should only be considered a bonus, and not a negative if someone doesn't have one. You'll want to make sure too that the side project doesn't occupy too much of their time - you don't want someone spending their work day on their side project!
  3. Blog posts/Twitter posts - These can show you some specifics of what they've worked on, what they're interested in, etc.

#3 - Get a good lineup to interview

Unless your company is tiny, you should have at least 4 people interview each candidate that you give an offer to. Different interviewer opinions/perspectives are very important here. Interviewers should have diverse roles too - not all managers, and not all developers. The lineup should definitely include at least one person the candidate would be working with on a day to day basis. It's never good to have high level managers do all of the interviewing.

Following these guidelines will ensure the best possible evaluation of the candidates, and give the candidates a better understanding of the company and the position.

#4 - Only one person interviewing the candidate at a time

No double teaming of a candidate, or even worse, committee interviews. This puts too much pressure on the candidate - they'll get no mental breaks in between questions, and it feels a little too hostile. You'll get a much better and more personal evaluation of a candidate if it's one on one.

The exception here is if you're training people how to interview, either by having them sit through your interviews to learn, or by you observing and evaluating them. But in these setups, only one of the interviewers should be asking the candidate questions; the other is there to observe. Explain that to the candidate upfront too.

#5 - Think of good coding questions to ask!

Like I said in #1, you need to ask coding questions. You're hiring someone to write software, why wouldn't you ask them to write code to evaluate them? You want to be in the room with them to observe how they solve the problem and answer questions - don't give them a problem and leave them alone for a long time. Whether or not you have a computer for them, writing on a whiteboard vs. paper, etc. doesn't really matter. What matters is that you have good questions to ask and you watch them solve them. You want to be able to say "If a developer did poorly on my question, they're probably not a good developer".  Not "If a developer answered my question well they're a ROCK STAR!" If your question is that tough, most good developers aren't going to answer it well.

Here are some things to avoid with your questions:
  1. No clever solutions - the problem shouldn't require a clever solution. If you ask a question that has only a clever solution, many good people won't think of the solution and totally bomb it.
  2. Not too long or complex - if it takes a good developer 30 minutes to complete your problem, that's way too long. You have to leave room for someone to start to go down the wrong path and correct themselves - good developers do this every day.
  3. No CS curriculum questions, unless you're interviewing exclusively entry level developers - Please, NO breadth first search, sorting, etc. questions. If a dev with 10 years of experience can't answer these types of questions that someone fresh out of school can, it certainly does not mean that the entry level dev is better. And really, how often in your day to day development do you need to write something like a breadth first search? And stay light on the CS concepts. Just because someone doesn't remember big O notation for different sort algorithms off the top of their head doesn't mean they are a bad developer.
  4. Language agnostic - The question shouldn't be too specific to a single programming language. You want to be looking for good developers overall, and for most positions, you don't want to require them to have extensive experience in your main language. You're limiting yourself too much there, and a good developer can learn another language easily.
  5. Watch glassdoor.com - sometimes people will post interview recaps on there and give away the questions you asked. If that happens change up your questions. You don't want someone to do well because they knew the answer coming in to the interview.

#6 - Ask them to talk through a solution for some problem

Software developers do more than just write code so you should ask more than just coding questions. For experienced developers in particular it's good to describe a problem or design question, and see how they can solve it. Ask them to design something, how they would diagnose and fix an issue, etc. These questions give an opportunity for developers with good real world experience to shine.

Be careful here though, you want the problem to be open ended. You don't want to have to spend a lot of time explaining it - if you do you're probably going to give them hints to what you are looking for. I am not a fan of describing a very specific problem in your product's domain and seeing how they answer - that requires too much explanation of your product's domain.

Also, you have to consider that candidates will all have a different background here, so you should be looking to learn something from their response.

#7 - Observe their thought processes for solving problems

For both the coding questions and talking through the problem, observe how they go about solving the problem. This is more important than the end solution. Do they try to come up with the perfect solution and don't come up with anything, not even mentioning good but not perfect solutions? This is an indication that the person might be smart but will have trouble getting things delivered. There are many other things to watch for when they solve the problems.

#8 - Culture fit is overrated

I've read many articles and heard many people talk about how critical culture fit is, that if a candidate doesn't seem to have the same values, motivations, etc. as the rest of your team, that they won't work out. That if it doesn't seem that they'd fit in with the rest of your developers, then you shouldn't hire them. Well, this line of thinking is really flawed, borderline discriminatory. If you're looking to hire developers you're growing, and you need to grow your culture too. You're never going to grow your culture by hiring the same type of people. And, you're REALLY limiting yourself and will have a much harder time finding people. If you find someone really good, who might not fit in with the rest of your team, what makes more sense - rejecting that person or diversifying your team? Great developers who will want to work for you are rare, you need to do everything you can to accommodate them, even if it forces some culture change at your company. What do you want the culture of your company to be - a place that tackles tough challenges and gets things done, or a frat house environment?

Something I've seen first hand and I'm sure is common is thinking less of someone because you don't think they will work the long and crazy hours that everyone else does. That's not a sign that the candidate isn't a fit, that's a sign that you need to change your culture to not be somewhere that people work crazy hours all of the time. That's a whole other topic of discussion though...

Now that being said, I do think there is value to someone being a "culture" fit, but I define culture as the technical culture. Like, if you have a culture of always looking for off the shelf tools/open source projects, researching and evaluating them, and incorporating them, and you're interviewing someone who has spent the last 10 years developing on the same stack and doesn't keep up with new technologies, that could definitely be an issue. Or, if your developers "wear a lot of different hats", and are involved in ops, doing both front end and back end, etc., and you're interviewing someone who just wants to do back end and doesn't have much interest in ops, that is definitely an issue.

#9 - Communication skills are very important

If you are working on a team, the ability to communicate well, both verbally and in writing, is really important. You probably don't want to hire someone who cannot take verbal feedback and understand what you mean, or someone whose written communication is hard to decipher and does not make sense. I personally have seen this at multiple jobs: someone might be a strong developer, but if they have trouble with code review feedback, or keep interpreting the wrong things from descriptions of the work, that will cause problems.

Now if someone is really strong technically, you can live with some communication issues. But a lot of times these issues get discounted during interviews in favor of technical prowess, and they should not. You can see signs of this in the interview - if they misinterpret the questions you are asking, or especially if you answer questions for them and they still have the wrong interpretation, you're probably going to have trouble working with them.

Communication skills are especially critical if the position is remote, or if you have a distributed team.

#10 - Be courteous to the candidates

Throughout the interview, show courtesy to the candidate. Remember that you are selling the company, and if you're an asshole to the candidate, you're now a reason why the candidate won't want to come work for the company. If a candidate doesn't answer your question well, don't make them feel stupid. Of course you can answer it well - you came up with the question - so put yourself in their shoes and think back to times you've been interviewed. If the candidate is just not doing well and there is no way you would recommend hiring them, still be nice to them. You don't want bad reviews of your company showing up on Glassdoor, and the easiest way to get those is to be rude.

#11 - Be positive and sell the company

Don't forget that the candidate is evaluating your company, and you want to make it sound like a great place to work (even if it's not). Be positive and upbeat throughout the interview! If every interviewer acts grumpy and like they're too busy to deal with an interview, the candidate is not going to want to come work for the company. Try to act like you're happy to work at the company and excited to bring someone else on to the team.

Now, you want to be realistic here: if you're too positive and don't mention any of the company's issues when the candidate asks, it's too obvious that you're not telling the whole truth, and that will be a red flag to the candidate. But always try to have a positive flip side for every negative thing you mention.

#12 - Collaborative feedback from all interviewers

When the interview is finished, you need to collect feedback from everyone who interviewed the candidate. Every interviewer needs to form an opinion on whether to hire or not - no "I'm not sure" feedback. Everyone's feedback is valuable so you need an opinion from everyone. Feedback should be detailed too, more than just a yes or no, so you can know why the interviewer has the opinion that they do. The hiring manager needs to consider everyone's input too - don't discount someone's opinion. If you're going to discount someone's opinion, they probably shouldn't be interviewing.

For things to work best this should be collaborative. A quick meeting with everyone who interviewed the candidate (including the phone screener, and possibly HR/recruiting) is best but many times this is too difficult to get everyone together for, so a group chat or having everyone email their feedback to everyone else is good. Everyone seeing everyone else's feedback is important because it can change the opinions. An example that I've gone through is that the candidate was a little careless in the coding question - they missed the edge cases, had logic backwards, etc. I kind of overlooked this and still recommended the candidate. But when I heard similar issues from all of the other interviewers I changed my mind - the issue was not isolated to just me. Without the collaborative feedback I wouldn't have heard this.

Something to watch out for with collaborative feedback though is that the interviewers are respectful of each others' feedback. No bullying or convincing interviewers to change their opinion. Timid interviewers might just go with the group to avoid a conflict but this is not good - you want everyone to feel free to share their opinion.

And, at the end of the day, the group does not have to 100% agree on a final decision. The hiring manager makes the final decision taking the interviewers feedback in to consideration. This is OK - you're never going to get everyone to agree all of the time, and if you do, you're probably bullying people to come to an agreement which will make it less likely they'll give real feedback which the hiring manager needs.

However, you don't want anyone to be strongly opposed to giving an offer to the candidate. A "well, if it were my decision alone I would say no, but I can understand why we'd want to give the candidate an offer" is OK, but a "This is a really bad idea, I don't see how you all can think that we should hire this person, I'm not going to work with this person if we hire them!" is not good. Either the interviewer's idea of what you are trying to hire for is not in line with the hiring manager's (which is common for junior/mid level engineers), or you are discounting their feedback. Both of those can be solved.

#13 - Strong opinions are good

If most people who interviewed the candidate have a lukewarm response ("Yeah, I guess so, didn't wow me but did decent"), that is a red flag. You want most interviewers to feel pretty strongly about hiring the candidate. Chances are if most people do not, if/when you hire the candidate, they're going to be mediocre or bad. Lukewarm feedback is also an indication that the interviewers aren't committing and aren't thinking about the evaluation enough. If an interviewer always gives a lukewarm response, you probably need to work with that interviewer to ensure they are giving their real opinion.

#14 - Have a career progression ladder and salary bands with each position

Salary negotiations with the candidate are OK, but you should have defined job titles, with salary bands defined for each title. If the candidate wants more money than the position's band allows, you need to bump them up to the next position and evaluate whether they would be a fit for that position. This is to ensure that your current employees are treated well and are being paid appropriately. Trust me, developers talk to each other about salary. I've seen things go really badly when developers who have been there for years find out what a relatively new employee is making because they demanded a high salary. You want your current employees to feel appreciated and paid well; if they don't, they'll leave or be undermotivated. Having defined job titles, and salary bands around each, is the only way to ensure fairness to current employees. This can also help make sure things don't spiral out of control in negotiations.

#15 - Hiring process should be a funnel

After you've been interviewing a lot of people, take a look back at numbers for your hiring process. How many people make it on site from the phone interview? How many people do you give an offer to from on site interviews? Ideally this should be a funnel. So for example, a third of the resumes/applications that catch HR's interest should have a phone interview, a third of those pass the phone screen to on site, and a third of those you give offers to. The number there (one third) can change, but the point is, at each step you should be filtering out a good number of candidates. If you are not, you probably are not evaluating tough enough. And if only 10% of candidates are making it past each step, you're probably being too restrictive.

You need a big history to evaluate this well though. For a 6 month period you may have just gotten really lucky or unlucky. So you need high numbers at each step to make a fair evaluation.

Final Thoughts

I hope you found this write-up useful. I'd love to hear if there are things you disagree with, or things you really agree with! More feedback is better for me - I love hearing different perspectives.

Wednesday, May 2, 2012

Detecting non-ASCII characters in a git commit hook

If you don't want to allow non-ASCII characters in your code, which can sneak in when pasting text from Word, you can add a pre-commit hook to git to check for this. Create a file called pre-commit in the .git/hooks folder of your code repo and make it user executable (chmod u+x .git/hooks/pre-commit). Git will then halt when you attempt to commit if there are non-ASCII characters in the commit (binary files are not looked at), display the character(s) found, and show the diff of the file that includes the character.

If you need to commit non-ASCII text that you know is safe, you can temporarily disable the script by running "chmod u-x .git/hooks/pre-commit", make your commit, then "chmod u+x .git/hooks/pre-commit" to re-enable it.

Here is what the pre-commit file should look like:
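A minimal version looks something like this (a sketch only; it assumes GNU grep, whose -P option is used to match the non-ASCII bytes):

  #!/bin/sh
  # .git/hooks/pre-commit
  # Abort the commit if the staged changes contain non-ASCII characters.

  # Scan the staged diff. Binary files only show up as a
  # "Binary files ... differ" line, so their contents are never scanned.
  matches=$(git diff --cached | LC_ALL=C grep -nP '^\+.*[^\x00-\x7F]')

  if [ -n "$matches" ]; then
      echo "Commit aborted: non-ASCII characters found in the staged changes:" >&2
      echo "$matches" >&2
      echo >&2
      echo "Offending diff lines, with the characters highlighted:" >&2
      git diff --cached | LC_ALL=C grep -P --color=always '[^\x00-\x7F]' >&2
      exit 1
  fi

  exit 0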

Sunday, April 22, 2012

My Trips Facebook app will not work after June 1

Starting on June 1, 2012, the My Trips Facebook app will no longer be available. This is because Facebook will stop supporting a technology, FBML, that My Trips is built with. Because My Trips is just a fun little side project for me, done completely outside of my regular job, and the usage of My Trips is very low, I can't justify spending the time that it would take to redesign My Trips with a supported technology.

In a nutshell, FBML allowed me to pretty quickly create My Trips without having to specify font sizes, colors, etc. Things like the tabbed look of My Trips are possible with a very simple FBML command. When I started work on My Trips in 2009, Facebook was promoting FBML as one way to create Facebook apps. Had it not been for FBML, I probably would not have created My Trips. However, in 2010 Facebook started discouraging the use of FBML. I suspect this is mainly because it uses too many resources on their servers. I don't agree with Facebook's decision to completely abandon FBML, however, as a software developer I can understand why they would abandon it.

I'd like to thank everyone for using My Trips over the years. If you know of any Facebook apps that provide similar functionality, please post a comment here!

Thursday, March 29, 2012

jQuery .on performance

jQuery's .on() function is very useful.  It allows you to bind event listeners for elements that haven't yet been created.  On pages where you're dynamically adding elements, this can make the code much cleaner and unobtrusive.  Rather than attaching the event handler to every newly created element one at a time, simply attach a class to all new elements, and call .on() for this class name with the event handler function once when the page loads for the first time.
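As a quick illustration (the ".item" class and "#container" div here are just made-up names), the two styles look like this:

  // Direct binding: the handler has to be attached to every element as it is created.
  $('#container').append(
    $('<div class="item">new item</div>').click(function () {
      console.log('clicked', $(this).text());
    })
  );

  // Delegated binding with .on(): one handler, registered once on page load,
  // covers any element with the class, including elements added later.
  $(document).on('click', '.item', function () {
    console.log('clicked', $(this).text());
  });
  $('#container').append('<div class="item">another new item</div>');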

.on() simply grabs the event when it happens at the higher level that you specify (usually document or a container div), checks if the element that caused the event matches any of the selectors for any added .on() calls, and if so calls your handler.

This functionality is also provided by .live(), but as of jquery 1.7, this function is deprecated. Use .on() instead.

tl;dr

Use .on()! Using .on() to capture the event over attaching a handler directly to each element has virtually no performance impact when the event is triggered, even when there are a huge number of unique elements with their own .on() handler on the page. However, using .on() does have a very noticeable performance advantage when generating/rendering elements. So any performance arguments against .on() are invalid.

Measuring Performance

Because of the way that it works, you may think that there is a performance hit to using .on() instead of attaching the handler to each element when it's created.  So I decided to do some extensive testing to see if this was the case.

I wrote a simple test page that dynamically generates lots of clickable elements.  See this page at http://coordinatecommons.com/jquery-on-test.html.

For each test case, there are two different measures of performance. First is how long it takes to dynamically generate the elements. When using .on, this is mostly the time to simply generate the DOM elements. However, when using .click to bind the listener one at a time, it takes longer because of the added step to attach the listener at this point.

The second measure is how long it takes for the callback to be called after clicking. For this, the time measured is from the parent container's mousedown event to the click event handler being called. Because the starting time is taken on mousedown, there is some test-to-test variability based on how long it took me to release the mouse button. Any result here can vary by 100-150ms, so the results should not be analyzed at any finer granularity than about 150ms. And realistically you can probably subtract 80-100ms on average from each of these to get the actual times.
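Roughly, the measurement works like this (a simplified sketch, not the actual code from the test page; ".clickable" and "#container" are made-up names):

  // Record the time when the mouse button goes down on the parent container...
  var mouseDownAt;
  $('#container').on('mousedown', function () {
    mouseDownAt = new Date().getTime();
  });

  // ...then report how long it took for the delegated click handler to fire.
  $(document).on('click', '.clickable', function () {
    console.log('handled after ' + (new Date().getTime() - mouseDownAt) + ' ms');
  });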

Test Cases

  1. Generate 10,000 divs with the same class name, using .on - generate 10,000 of the same type of element that will all use the same event handler. Attach the same class name to all elements, one call to .on.
  2. Generate 10,000 divs with one of 100 different classes names, click handler using .on - 100 different event handlers, 10,000 total elements. .on is called 100 times
  3. Generate 1,000 divs with unique classes, click handler using .on - 1,000 unique event handlers for 1,000 elements. .on is called 1,000 times
  4. Generate 10,000 divs with unique classes, click handler using .on - 10,000 unique event handlers for 10,000 elements. .on is called 10,000 times
  5. Generate 1,000 divs with unique IDs, click handler using .click - attach an event listener to each element with .click as the element is being added.
  6. Generate 10,000 divs with unique IDs, click handler using .click - same as above but with 10,000 elements.

Tests 1 and 6 are the ones that most directly compare the performance of attaching a handler to each element as it's added versus using .on; a rough sketch of the two setups follows.
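Here is roughly what those two setups look like (a sketch only, not the actual test harness from the page above):

  function onClick() { /* the click time is recorded here */ }

  // Test 1 style: one delegated handler, elements appended with just a shared class.
  $(document).on('click', '.clickable', onClick);
  for (var i = 0; i < 10000; i++) {
    $('#container').append('<div class="clickable">item ' + i + '</div>');
  }

  // Test 6 style: a handler attached directly to each element as it is created.
  for (var j = 0; j < 10000; j++) {
    $('#container').append(
      $('<div id="item-' + j + '">item ' + j + '</div>').click(onClick)
    );
  }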

Test Conditions

For Chrome, Firefox, and IE9, a desktop machine (quad core 3 GHz, 8 gigs of RAM) running Windows 7 Professional 64 bit was used. For IE6, 7, and 8, a Windows XP Virtualbox VM running on the desktop machine above was used.

Performance Results Table

Test | Chrome 17 | Firefox 11 | IE9 | IE8 | IE7 | IE6
Test 1 RENDER - 10K, same class/handler, .on | 912 ms | 271 ms | 3020 ms | 3142 ms | 3668 ms | 3877 ms
Test 1 CLICK - 10K, same class/handler, .on | 70 ms | 74 ms | 110 ms | 121 ms | 110 ms | 133 ms
Test 2 RENDER - 10K, one of 100 classes, .on | 1081 ms | 344 ms | 3270 ms | 4857 ms | 5732 ms | 5965 ms
Test 2 CLICK - 10K, one of 100 classes, .on | 94 ms | 114 ms | 111 ms | 131 ms | 137 ms | 95 ms
Test 3 RENDER - 1,000 unique classes, .on | 328 ms | 164 ms | 832 ms | 1483 ms | 1385 ms | 1021 ms
Test 3 CLICK - 1,000 unique classes, .on | 140 ms | 162 ms | 107 ms | 140 ms | 107 ms | 120 ms
Test 4 RENDER - 10,000 unique classes, .on | 2772 ms | 1397 ms | 14050 ms | 15602 ms | 47609 ms | 29614 ms
Test 4 CLICK - 10,000 unique classes, .on | 245 ms | 252 ms | 149 ms | 421 ms | 409 ms | 442 ms
Test 5 RENDER - 1,000 unique IDs, .click | 281 ms | 175 ms | 898 ms | 1983 ms | 2133 ms | 2023 ms
Test 5 CLICK - 1,000 unique IDs, .click | 106 ms | 112 ms | 100 ms | 103 ms | 100 ms | 90 ms
Test 6 RENDER - 10,000 unique IDs, .click | 2826 ms | 1576 ms | 14618 ms | 50673 ms | 65835 ms | 66606 ms
Test 6 CLICK - 10,000 unique IDs, .click | 80 ms | 113 ms | 106 ms | 94 ms | 100 ms | 130 ms

Results


Using .on() to capture the event over attaching a handler directly to each element has virtually no performance impact when the event is triggered, even when there are a huge number of unique elements with their own .on() handler on the page. I expected there to be at least some noticeable lag in the click times when there are 10,000 unique elements, but it was only barely noticeable, and only on IE8 and below. And that's with using .on() in a way that it shouldn't be used. Test 1 is the way that .on() should be used, and it performs wonderfully - click times are virtually identical to test 6, where each element has a directly attached handler.

However, using .on() does have a very noticeable performance advantage when generating/rendering elements. This is obvious when comparing test 1 to test 6: for the same number of elements, rendering with .on() is anywhere from roughly 3 times faster (Chrome) to nearly 18 times faster (older IE) than attaching the handler to each rendered element!

So based on this my recommendation is to use .on() to attach event handlers any time there will be more than one element added with the same function used for the handler.

Other observations


Another thing I found interesting is that on nearly all tests, Firefox is the fastest. Chrome is definitely behind Firefox for these tests. Also, seeing the numbers for IE8, it's a real shame that nearly 25% of the world is using this browser. Microsoft did very little to improve performance between 6 and 8, and the performance improvements in 9 are often very small. Microsoft, IE10 better be blazingly fast! And, please, work on getting Windows XP users to upgrade to IE10. Firefox and Chrome run perfectly well on Windows XP, so your own browser should as well.