I’ve been looking at Aakash’s awesome Firefox Input site to try to figure out how we are doing with Firefox 4. Unlike others who have been sifting through individual reports and writing bugs, I’ve been trying to look at the data in aggregate. I ran some ad hoc queries over the lifespan of the Firefox 4 betas and thought the results were interesting. Of course, this is not scientific in any way, but I think the insights are still valuable. Some sort of triggers/alarms doing this analysis automatically should be added to Input (bug coming soon).

What prompted me to look specifically at Input data was a mention of bug 628872 on Twitter. It seemed like a bug that should block Firefox 4, but I wanted to know the extent of the problem. Rather than try to reproduce it, I went to Input and saw the following:

Though likely not statistically significant, there has definitely been an uptick in negative feedback containing “iplayer”. This graph even helps to narrow down a regression timeframe. Recognizing the usefulness of this approach, I did a bunch more queries that came to mind.
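An ad hoc query like this boils down to counting, per day, the feedback entries whose text mentions a term. Here is a minimal sketch of that idea; the `feedback` records and the `daily_mentions` helper are hypothetical illustrations, not Input’s actual data format or API:

```python
from collections import Counter
from datetime import date

# Hypothetical feedback records as (submission date, feedback text) pairs.
# The real data lives on input.mozilla.com; these rows are made up.
feedback = [
    (date(2011, 1, 25), "iplayer is broken again"),
    (date(2011, 1, 25), "love the new tabs"),
    (date(2011, 1, 26), "iPlayer videos won't start"),
]

def daily_mentions(records, term):
    """Count feedback entries per day whose text mentions `term`
    (case-insensitive substring match)."""
    counts = Counter()
    for day, text in records:
        if term.lower() in text.lower():
            counts[day] += 1
    return dict(counts)

# Counts per day mentioning "iplayer":
print(daily_mentions(feedback, "iplayer"))
```

Plotting those per-day counts over the beta period gives exactly the kind of graph discussed here.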

I first decided to see if the YouTube player had a similar graph:

From the graph it is easy to tell that YouTube feedback has been relatively consistent, spiking every time we release.

In weekly Mozilla meetings there has been talk about a bug on Hotmail’s side causing problems for Firefox 4 beta users. Searching for “hotmail” I got:

Clearly users are feeling the pain and letting us know. Similarly, one of the top issues discussed in support reports has been copy and paste. Searching for “copy paste” gave me:

We’re on the case (bug 613915) and need to fix the issue before final.

Next I thought about bug 626016 (which is about Facebook chat) so I searched for “chat”:

The two spikes are interesting. My completely uninformed guess about the spike on August 12 is Facebook (or some other chat) going down or a server-side website release that went wrong and was quickly rolled back. The spike on the right is likely bug 626016.

Up next I looked at “netflix”:

This is another interesting graph. The left spike was likely due to the known issue of bad user-agent detection on Netflix’s side (bug 522957). The increased displeasure on the right is likely due to bug 598406. According to that known issue, “hulu” had similar sniffing problems, which look to have been resolved:

This method can also be used to gauge general user sentiment. I knew the removal of the status bar was contentious, so I pulled up the graph for “status”:

Clearly you can see the initial displeasure when the change landed in a release and the resulting dropoff. Of course, there is still a level of sustained feedback which has prompted some additional product changes.

Finally, I searched for “apple” with no bug particularly in mind. I was pretty surprised to find this graph:

The feedback seemed to spike and fade too suddenly to be a Firefox issue. I did a quick Google search for “apple october 21” and immediately saw what was going on. On the 23rd Apple reported their earnings. Such an event wouldn’t normally impact Firefox in any way, but Apple live-streams their earnings report. Because Apple is heavily invested in H.264, they streamed it using that technology. Since Firefox doesn’t support H.264, Firefox users couldn’t access the stream and were complaining. The complaints were only relevant while the live stream was, and disappeared the next day. Fascinating!

I found this sort of analysis interesting and thought provoking and am glad I have a tool like Firefox Input available to me (and the world!).


9 Responses to Mining input.mozilla.com for fun and profit


  2. Fred says:

We are very glad you like Input so much, and thanks for showing the community how to extract useful insights from Input. We are trying to build the most useful tool for both our users and the people who are interested in what they have to say. Collecting data is one thing; extracting useful results is a whole other problem. I find your above examples just great!

  3. Alex Faaborg says:

Awesome post, and it’s really interesting to see how the feedback trends against several specific flaws.

    This method can also be used to gauge general user sentiment. I knew
    the removal of the status bar was contentious, so I pulled up the
    graph for “status”:

    I think this is the only area where I would add a caveat. In cases where there isn’t a tradeoff being made, I think we can actually gauge general user sentiment; for instance, we don’t lose anything by making copy and paste work 🙂

    However, I often see people saying “look at this graph of status; if we just revert the changes it will level out again” (not you in this post, but people in general). Here, there is a tradeoff. For instance, look at these graphs in comparison:


    So to really gauge general user sentiment, we have to try to look at changes over a range of different dimensions. If we can make them all go positive, then we are clearly winning.
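One crude way to look across several dimensions at once is to compare each term’s average daily mentions before and after a change landed. This sketch is purely illustrative: the `series` data is invented, and `sentiment_delta` is a hypothetical helper, not part of Input:

```python
def sentiment_delta(series_by_term, split):
    """For each query term, compare mean daily mentions before and
    after index `split` in its daily-count series. A positive delta
    means complaints rose after the change; negative means they fell."""
    deltas = {}
    for term, counts in series_by_term.items():
        before, after = counts[:split], counts[split:]
        deltas[term] = sum(after) / len(after) - sum(before) / len(before)
    return deltas

# Hypothetical daily negative-feedback counts around a UI change:
series = {
    "status": [2, 3, 2, 9, 8, 7],   # complaints rise after the change
    "toolbar": [4, 4, 5, 3, 3, 2],  # complaints fall after the change
}
print(sentiment_delta(series, 3))
```

If every dimension’s delta goes negative (fewer complaints across the board), that is the “clearly winning” case described above.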

  4. Aakash Desai says:

    Would it be wrong to hug you?

  5. Majken "Lucy" Connor says:

    Very cool! I can see this helping a lot in terms of trying to find regression windows, and like you said, alerting to new problems. Alex has really great insights, too.

    BTW the facebook bug was actually a different one, where chat (and a couple other things) was entirely broken in the Jan 25th build. Don’t have the bug number offhand since it was noticed and fixed really quickly.

  6. njn says:

    Looks like the complaints about memory usage started on November 10, when beta 7 came out:


    Beta 7 was the first release to include JaegerMonkey. Bug 615199 strikes again…

  7. So the next thing to do, surely, is to have some way of mathematically determining an “interesting” graph (e.g. one which goes low but then has a sharp uptick which continues until now). Then, have a tool which takes random words out of posts, queries on them and sees if the results are “interesting”. If they are, it files a bug with the query parameters and says “someone, check this out”.
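The “interesting graph” test the commenter describes can be approximated with a simple baseline-versus-tail comparison. This is just one possible heuristic with made-up thresholds, not anything Input implements:

```python
def is_interesting(counts, window=3, factor=3.0, min_spike=5):
    """Flag a daily-count series whose recent tail jumps well above its
    earlier baseline -- one reading of 'goes low, then a sharp uptick
    that continues until now'. Window and thresholds are arbitrary."""
    if len(counts) <= window:
        return False
    baseline = counts[:-window]
    tail = counts[-window:]
    base_avg = sum(baseline) / len(baseline)
    tail_avg = sum(tail) / len(tail)
    # Require the tail to be both absolutely large and a multiple
    # of the earlier baseline (floored at 1 to avoid divide-by-zero
    # style blowups on quiet series).
    return tail_avg >= min_spike and tail_avg >= factor * max(base_avg, 1.0)

# A quiet series vs. one that spikes at the end:
print(is_interesting([1, 0, 2, 1, 1, 0, 1]))    # False
print(is_interesting([1, 0, 2, 1, 9, 12, 15]))  # True
```

A tool could run this over the daily counts for candidate words pulled from feedback and file a “someone, check this out” bug whenever it returns True.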