Redux vs React’s setState()

Sometimes you find yourself choosing between Redux and React’s setState(). When I’m making this choice, I often use the following rule of thumb.

Imagine that your app can restore its state when you refresh the page.

Use Redux if you’d prefer to restore this specific piece of state.
Use setState() if you don’t have this need.

Basically, this is about the importance of a piece of state. Is it important enough to keep it across refreshes? If yes, use Redux. If no, setState() would do the job just fine.

Here’s how I’d make the choice:

Redux:

  • The selected value in a dropdown on a page
  • The current page of a book in a book reader app
  • The current level in Angry Birds

setState():

  • The open/closed state of a dropdown
  • The visibility of toolbars in the app
  • The state of birds and pigs in the current level in Angry Birds

For me, the state in the left column is important, and the state in the right column is not. Your app can require a different decision.

This criterion is not universal – e.g. sometimes you might need to put the “dropdown is open” state into the store because another component’s styles change based on it. But for me, it works in most cases.

Posting this on Reddit triggered a great comment:

Redux = data grabbing and global state that’s shared across more than one component.
setState = silo’ed state that is isolated from other components.

Yes! Another (and probably more common, as I’m realizing after receiving the feedback) criterion for choosing between Redux and setState() is how global the state is. So here’s another approach:

Use Redux if your state is shared across multiple components. Use setState() if it’s used only in a single component.

Exclusive code ownership (why it’s bad and what to do with it)

In December 2016, a front-end developer left the team I work on. Among other things, this created an issue: while he was with us, the project had a couple of complex modules that he and I developed together. When he left, I realized I was the only one able to work with them – the other two team members knew nothing about how these modules function. This is called “exclusive code ownership”, and it’s bad in the long term, so I had to start fixing it. Here’s why it’s bad and what to do about it.



Bus factor#

Imagine there’s a complex module in your project that only you work with, and you’re hit by a bus tomorrow. Will the team be able to continue developing this module the next day? Most likely not. This means the bus factor of your team is 1 – that is, removing a single developer from the project is enough to severely slow down or even stop its development.

Until recently, our project had relied heavily on a custom front-end framework developed by a single team member. No other developers knew its internals, and only a couple of them knew it deeply enough. This meant the bus factor of the whole front-end team was just one. This, along with other issues, led us to migrate from the custom framework to React in 2016.


If there’s a module in the project that you exclusively own, then, regardless of the team size, only you will be able to make changes to it. If tomorrow you receive 5 critical issues related to this part, there’ll be nobody to help you. You may even slow down your team if these critical issues block them.

We hit this problem when a part of our front-end infrastructure changed and broke the builds for the whole team. Because I was the only one who knew how all this stuff worked, I had to fix the problems one by one. If other team members had known it too, they could’ve helped me bring everything back to normal sooner.

Leaving is hard#

If you ever decide to leave the project, you’ll have to transfer all your unique knowledge about how these parts work to other team members. Depending on the amount of knowledge, this can take significant time, and it will still be less effective than if the other members had had real experience with the code.

You create risk and slow down delivery

What to do#

Documentation and maintainability#

You have probably heard this advice many times, but it bears repeating: make your code documented and maintainable. This includes code cleanliness, inline comments, and external documentation.

It’s your choice how much to document the code. I usually try to make it self-documenting and add comments only if the purpose of the code or the decisions behind it would otherwise remain unclear. Also, remember that self-documenting code is better than comments, and comments are better than external documentation. People occasionally forget to update comments and often forget to update external documentation, so the only reliably up-to-date source of information is the code itself.

Commit messages#

In my experience, writing detailed commit messages is heavily underestimated. I’m sure one of the best ways to share your knowledge of the code is to include it in your commits. Once you commit something, it stays in the history forever, and tools like git blame make it easy to find which commit changed specific lines. This means that even after you leave the project, another developer working with the code you’ve written will be able to git blame it and see why you made this or that change.

What should a good commit message contain? Here’s an example from my project that I like (I wrote it, so of course I like it):

Look, this commit includes:

  • the issue number (so that another developer could go and see the business requirements behind the change),

  • the short description of the change (the usual part),

  • and the longer text that explains the commit.

The last part is exactly where you put your knowledge of the code. Explain why you made the changes. Describe the background that could be unclear to another developer half a year later. Document the limitations of your solution and the issues it creates (if any). Include everything else you consider significant.
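For illustration, a message following this structure might look like the following (the issue number and all details here are hypothetical):

```
PROJ-123: Debounce the search field input

Fire the search request 300 ms after the user stops typing instead of
on every keystroke. The backend was getting overloaded by partial
queries (see the discussion in PROJ-123).

Limitation: the loading indicator now appears with a short delay;
we agreed this is acceptable for now.
```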

See this article by Caleb Thompson if you want to read more about good commit messages (in particular, take a look at point 4). Also, kudos to Andrey Listochkin for giving this advice in one of his talks.


If you’re a team lead, and you realize that you have this problem in your team, you can use your power to change this:

  1. Find what modules are exclusively owned by someone.
  2. Find what tasks you can give to other team members so that they get experience with these modules.
  3. Ask the module owner to introduce developers that will work with the tasks into the module code.

If you’re a regular developer, and you realize your team has this problem, talk with your team lead. Feel free to show them this article.

Also, this can be addressed by introducing mandatory code review or pair programming, though we haven’t practiced these ourselves.

Docs, commit messages, processes


So, well, the checklist:

  • Write self-documenting code

  • Use comments when it’s necessary

  • Write good commit messages answering “why”, not only “what”

  • Involve other team members in working with the code

Follow me on Twitter: @iamakulov

How to install Google Chrome on Travis (in a container-based environment)

Updated 22 Jan 2017: removed “addons.apt.sources” because it looks like the google-chrome source is now provided by default.

Here’s the code. Scroll down for more details:

dist: trusty
sudo: false

addons:
  apt:
    packages:
      - google-chrome-stable

before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start &
  - sleep 3

What’s this#

Travis CI is a popular continuous integration tool. It can be used to run tests when the project code changes, automatically publish changes to the repository, and so on. With this config, Travis installs Chrome on each run. This can be used e.g. for automated UI testing.

Container-based infrastructure#

Since the beginning of November 2016, you can run Ubuntu 14.04 on Travis CI in a container-based environment. A container-based environment is cool because it noticeably speeds up builds, provides more resources, and more. The drawback is that you can’t use sudo, but Travis has replacements for some common use cases.

What does this have to do with Chrome? Chrome can’t be installed on Ubuntu 12.04, which until recently was the only available container-based environment. If you were doing UI testing, you either had to use Firefox (geckodriver), which was buggy as hell, or had to accept much longer build times. ¯\_(ツ)_/¯

This config does enable the container-based infrastructure.

What does the code mean#

sudo: false
dist: trusty

sudo: false switches the environment into the container-based mode (and disables sudo). dist: trusty switches the distribution to Ubuntu 14.04 (the default is 12.04). Here are the other possible field values if you need them:

addons:
  apt:
    packages:
      - google-chrome-stable

This installs Chrome. Travis provides the apt addon which is a handy way to install necessary packages. (Also, it’s the only way possible in the container-based infrastructure.)

The packages part of the addon specifies to install the google-chrome-stable package. This package gets installed from the official Google Chrome source which seems to be enabled in Travis by default.

Here’re the docs for the apt addon:

before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start &
  - sleep 3

This starts xvfb (X Virtual Framebuffer), which imitates a display and makes Chrome think it’s running in a GUI environment. sleep 3 is required to give xvfb time to start (this is what Travis recommends). If you need to set the screen resolution or the pixel depth, check out the docs:

Did this help you? Follow me on Twitter: @iamakulov

npm 4 is splitting the “prepublish” script into “prepublishOnly” and “prepare”

Updated 24 Jun 2017: reflected changes in plans about npm v5..v6.

On October 20, npm is releasing v4.0.0. Apart from other breaking changes, this release includes one affecting a lot of packages: the prepublish npm script is being split into prepublishOnly and prepare.



In v1.1.71, npm made the prepublish script also execute when you run npm install without arguments. Before this, npm ran the script only before you publish a package.

The reasoning behind this isn’t clear, but as far as I understand, it’s the following. You usually run npm install without arguments when you clone a package, go into its directory, and try installing its dependencies. If you’re doing this, you’re most likely a developer, and you’re going to do something with this package; therefore it’ll be useful to prepare the package for use. Since the prepublish script often includes commands that build the package for publishing (= prepare it for use), npm decided to execute it in this case too.

However, this prepublish behavior became disliked:

  • It’s weird. prepublish is pre + publish, and a lot of people didn’t expect it to also run when installing dependencies. I even thought it was a bug in npm when I first discovered how it works.

  • It creates problems. Many projects I’ve seen put building and testing commands into the prepublish script. Even npm recommends doing this. It’s convenient: it prepares the package for publishing and prevents an accidental release of broken code.
    However, if you use a CI environment like Travis or AppVeyor, which runs npm install on every build, things get worse. Your build and test tasks get executed twice: once on npm install and once on the actual npm test. This creates problems for you, such as increased build times or incorrect build statuses.

Current behavior is disliked

What’s next#

npm 4 is splitting the prepublish script into two new ones: prepare and prepublishOnly.

  • prepare will have the same behavior that prepublish has now. Commands in this script will run both before publishing and on npm install.

  • prepublishOnly, as its name suggests, will run only before publishing the package.

  • Also, prepublish will receive a warning about the changes.

npm realizes that prepublishOnly is an ugly name for a script. Therefore, there are also some other changes planned for later releases.

  • npm 5 will un-deprecate the previously deprecated prepublish, make it run only before publishing, and deprecate prepublishOnly. In this release, prepublish will get its expected behavior back.

  • npm 6 or later will remove prepublishOnly completely. After this, two scripts will remain: prepublish, which will run only before publishing, and prepare, which will run both before publishing and on npm install.

Update 24 Jun 2017: the plan above was accurate at the moment of publishing, but it seems npm isn’t following it anymore. See a Twitter thread with @maybekatz for more details.

There will be changes in npm 4…6

What to do when npm 4 is out#

Decide based on what your package needs.

  • If your package’s prepublish script contains commands that should run only before npm publish, rename the script to prepublishOnly. Examples of such commands are npm test and eslint.
  • If your package’s prepublish script contains commands that should run both before npm publish and on npm install, rename the script to prepare. Examples are commands that build your package (e.g. webpack or babel).
  • If you don’t know, rename the script to prepublishOnly – this is the behavior most people expect. (Alternatively, you can leave everything as-is, but there’s no point: it will just delay the decision, not remove it.)
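For example, a package that tests before publishing and builds with Babel might end up with scripts like these (the exact commands are just placeholders for your own):

```json
{
  "scripts": {
    "prepublishOnly": "npm test",
    "prepare": "babel src --out-dir lib"
  }
}
```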

P.S. If you like this, follow me on Twitter. I tweet selected articles and my own thoughts:

React anti-pattern: don’t make <li> your component’s root

I noticed a React anti-pattern while mentoring a junior front-end developer on our project.

Imagine you have a component that represents a list of articles. At one point, you realize that each article in the list is too complex:

const ArticleList = (props) => {
  return <div className="articles">
    <ul className="article-list">
      {props.articles.map(article =>
        <li className="article">
          <h2 className="article__title">{article.title}</h2>
          {/* 25 other tags */}
        </li>
      )}
    </ul>
    {/* ... */}
  </div>;
};

So you decide to move the item to a separate component. You take the code inside map(), extract it into <Article>, and get something like this:

const Article = (props) => {
  return <li className="article">
    <h2 className="article__title">{props.title}</h2>
    {/* 25 other tags */}
  </li>;
};

const ArticleList = (props) => {
  return <div className="articles">
    <ul className="article-list">
      {props.articles.map(article =>
        <Article title={article.title} {...article /* other props */} />
      )}
    </ul>
    {/* ... */}
  </div>;
};

Don’t do it this way. This approach is wrong. The problem is that by making a <li> the root of the component, you’ve just made the component non-reusable. If you’d like to reuse <Article> in another place, you’ll only be able to render it inside a list – because of that <li>. If you decide to render <Article> into e.g. a <div>, not only will this be non-semantic (<li> is only valid inside <ul> or <ol>), but it will also bring along unnecessary list-item styling, which is super weird.


The solution is simple: move the <li> back into the <ArticleList> component and make the root element of <Article> a <div> or something else. This will probably require some refactoring in your styles, but it will make the component reusable. Look how cool this is:

const Article = (props) => {
  // Notice: the <li>s are gone
  return <div className="article">
    <h2 className="article__title">{props.title}</h2>
    {/* ... */}
  </div>;
};

const ArticleList = (props) => {
  return <div className="articles">
    <ul className="article-list">
      {props.articles.map(article =>
        <li className="article-list__item">
          <Article title={article.title} {...article /* other props */} />
        </li>
      )}
    </ul>
    {/* ... */}
  </div>;
};

// And now you can easily render an article inside a sidebar – have no idea why you’d want to, though
const Sidebar = (props) => {
  return <div className="sidebar">
    <div className="sidebar__article">
      <Article title="Look ma" />
    </div>
    {/* ... */}
  </div>;
};

UI testing, Selenium, mocking AJAX requests and Likely

Likely are well-designed social buttons:

I’m currently working on covering them with UI tests using Selenium. So far, several notes:

  • Selenium is cross-platform, but different platforms support different sets of functionality. Node.js isn’t the most complete one, unfortunately. If you google how to do something with Selenium, find a StackOverflow reply with the Java API, and try to do the same in JavaScript, don’t expect it to necessarily work. That API could simply be absent.

  • Selenium 3 is coming, but most tutorials focus on Selenium 2. Selenium 2 was released in 2011, and version 3 is expected to be released this year. In fact, the selenium-webdriver npm package already installs 3.0.0-beta.2. There’re no major breaking changes between 2.53.2 and 3.0.0, but expect that some tutorial code could just not work.

  • The Selenium docs are scattered across different places, and it’s hard to find the right thing when googling. One part of the documentation is at, another is in the repository wiki, etc. It was quite hard to find the proper, up-to-date API docs for JavaScript, so here they are:

  • Mock external services when doing integration tests. When testing the sharing counters, we rely on responses from the social networks. It turns out these services don’t always work well (especially Facebook and Google+), which makes the tests fail. Viktor Karpov suggested mocking the responses, and it seems this is the standard way of doing integration tests, which I didn’t know. I’m working on this now.

  • Mocking AJAX requests with Selenium is hard. I need to mock them to simulate the social network responses (see the previous point). So far, I’ve found only two libraries that can help with this: Sinon.js and xhr-mock. Sinon.js is popular and feature-rich, but it has quite a complex API, and I haven’t yet succeeded in making it work. xhr-mock is way simpler and can also mock only specific URLs (which is more complicated with Sinon), but it doesn’t support XMLHttpRequest.prototype.addEventListener and doesn’t have a UMD build. Sadly.

You can follow the pull request I’m working on to stay tuned (and see how we manage to do the AJAX mocking):

Asking good questions when mentoring a person

We have a new junior/middle front-end developer on our project, and I’ve been mentoring him for a couple of days. My responsibility as a mentor is to help the developer understand most parts of the project in the shortest time. As a part of this, I regularly ask him two things in our conversations:

  • “Do you have any questions?” and
  • “Have I explained this well?”

“Do you have any questions?”#

This one is pretty obvious, but I try to repeat it as often as I can so that any questions the developer comes up with are answered quickly:

  • Arriving at the office in the morning: “Hey, how’re things going? Do you have any questions?”
  • Leaving after assigning a task: “Feel free to ping me if you have any questions”
  • Sitting with the mentee during a pause while waiting for a module to build: “By the way, do you have any questions about the system, or is something not entirely clear?”

The point is to make the developer super comfortable asking any questions he has. The more he asks, the quicker he learns.

“Have I explained this well?”#

I find this exact question extremely useful, and I feel like it’s my little invention (I haven’t seen anyone else using it yet). I ask it every time I explain something to the mentee to make sure I was clear and that the mentee’s level of knowledge is enough to understand the material:

  • “So this is how $module.require works… Have I explained this well?”

I prefer this question over “Do you have any questions?” because the latter feels too formal to me in this context.

The point about this question is that you should ask it in exactly this form. It’s easy to transform it into “Is this clear to you?” or “Do you get this?”, but you shouldn’t. All these variants are about the mentee and their ability to understand, and they could make the mentee feel insecure if they answer “No.” This hampers the communication between you and them, which is definitely not your goal. Only the “Have I explained this well?” form is not about the mentee and is therefore safe.

Bonus question: “Can you explain this to me?”#

This one is useful too. I ask it to check that the mentee has fully understood the topic we were talking about. If the developer answers well, great; if he doesn’t, I note the points where he gets stuck and help him with them.

Not only mentoring#

These questions are useful not only when you’re mentoring someone, but also when you’re giving a talk or a lecture, conducting training, etc.

For example, a lot of speakers answer questions only at the end of their talk, when half of the listeners have already forgotten what they wanted to ask. I tend to ask “Do you have any questions?” after each major part of my talk. This has two advantages:

  • attendees understand the talk better because we discuss the material that was on the screen just a moment before,
  • and there are fewer situations like “Could you please switch back to slide 2” when you’re on slide 53.

I used this approach with “Redux in real life” and in a company-internal React training, and it seems to have worked well.

“composes:” in CSS Modules

On our project at EPAM, we’re currently busy integrating the React stack into our infrastructure. We had an interesting discussion about CSS Modules today: should we use composes in our code?

Several interesting points:

  • composes is the official way to compose classes; however, it breaks compatibility with real CSS. If we use composes, it’d be hard to migrate from CSS Modules to something else in the future.
  • Shadow DOM is cool and should land in the major browsers pretty soon. A colleague believes that once Shadow DOM becomes widely available, CSS Modules will quickly become much less popular. A major advantage of CSS Modules is the encapsulation of styles, and Shadow DOM does this natively. Therefore, using CSS Modules in a long-term project is probably a bad idea.
  • There’s no official documentation on why composes was chosen over the traditional way of combining several classes, like <button class="button button-green">. Perhaps I just haven’t found it. I believe composes exists because it is cleaner than combining several classes:
    {/* With composes */}
    <button className={styles.buttonGreen}>
    {/* Without composes */}
    <button className={styles.button + ' ' + styles.buttonGreen}>

    but the real reasoning is still unclear.
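For reference, here’s a sketch of what composes looks like on the CSS side (the file and class names are hypothetical):

```css
/* button.module.css */
.button {
  padding: 8px 16px;
  border-radius: 2px;
}

.buttonGreen {
  /* Pulls in all of .button's declarations, so the markup
     only needs to reference buttonGreen */
  composes: button;
  background: green;
}
```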

(Originally published on Tumblr.)