Journocode is all about data journalism: using statistical know-how and digital skills to make good journalism. Finding stories in data you wouldn't have found otherwise, showing complex correlations in a more understandable way, turning "boring" numbers into something exciting. Sounds awesome!
But where to start?
Daniel Angus teaches data journalism at the University of Queensland and shares his five tips for DDJ newcomers.
1. Journalism requires journalists
Good data journalism fundamentally requires good journalists. This sounds blindingly obvious, but when new technologies and technological practices enter workforces where human capital costs are high, there is often a tendency for management to try to use those technologies to reduce labour costs. Data journalism does not lend itself to such rationalisation, and should instead be seen as a set of practices that can enhance the quality of storytelling when done well. Data journalism still requires human editorial judgement: What data do we need? Which patterns are meaningful? How can we best communicate this story to our audience?
2. "Everything is a remix" – Kirby Ferguson
The internet is rife with examples of great (and not so great) data journalism. Use this as inspiration for your own stories. If you see a story that you like, decompose it and reflect on what makes it tick. How can you remix these elements in an upcoming story? Likewise, critique your own work, and that of your peers, to understand how stories can fall short of your expectations, and learn from it. Journalists are very good at this form of reflective practice.
3. Don’t reinvent the wheel
Similar to the remix idea above, get good at creating data workflows, and get good at sharing them. Need to scrape a year's worth of political party speeches and then work out the geographical place names that were mentioned? Someone, somewhere, has already done it. Find a community of practice online that is willing to share these data workflows, ideas and tips. Or locate a local hacker community and seek out their advice.
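To make the place-name example concrete, here is a minimal sketch of one step in such a workflow: matching speech text against a gazetteer of known place names. Everything here is invented for illustration — a real workflow would load a full place-name list (for example from a source like GeoNames) rather than a hand-typed set:

```python
import re

# Tiny hand-made gazetteer for illustration only; a real workflow
# would load thousands of place names from a reference dataset.
GAZETTEER = {"Brisbane", "Sydney", "Melbourne", "Queensland", "Canberra"}

def find_places(text):
    """Return the gazetteer place names that appear in the text."""
    # Crude heuristic: treat capitalised words as candidate names,
    # then keep only those that are in the gazetteer.
    candidates = set(re.findall(r"[A-Z][a-z]+", text))
    return sorted(candidates & GAZETTEER)

speech = "The member for Brisbane spoke about flooding in Queensland."
print(find_places(speech))  # ['Brisbane', 'Queensland']
```

Once a step like this works, it is worth wrapping it up and sharing it — the next journalist with a pile of speeches can start from your function instead of from scratch.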
4. Data cleanliness is next to godliness
Missing values, errors in measurement, mismatched data fields, different naming conventions… expect that data from the wild is going to be unclean. Ask a data journalist how much time they spend on cleaning data and watch their eyes roll back into their head. A consequence of the fact that we are digging around in places where people don’t want us looking is that the data we get will require lots of manual effort to get it ready to use. There is no easy way around this other than to take heed from above and get good at building workflows to help you maintain your sanity. The bulk of your time will be spent cleaning data.
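A small sketch of what that cleaning work looks like in practice, using invented rows of the kind you might scrape: mixed naming conventions, stray whitespace, decimal commas, and missing values. All names and figures here are made up for illustration:

```python
# Messy rows as they might arrive from the wild (invented data).
raw = [
    {"state": " Queensland ", "unemployed": "5.9"},
    {"state": "QLD",          "unemployed": ""},     # missing value
    {"state": "queensland",   "unemployed": "6,1"},  # decimal comma
]

# Map the naming variants you encounter onto one canonical form.
ALIASES = {"qld": "Queensland", "queensland": "Queensland"}

def clean_row(row):
    """Normalise one scraped row: canonical name, numeric rate or None."""
    state = row["state"].strip()
    state = ALIASES.get(state.lower(), state)
    value = row["unemployed"].replace(",", ".").strip()
    rate = float(value) if value else None  # keep missing values explicit
    return {"state": state, "unemployed": rate}

cleaned = [clean_row(r) for r in raw]
```

The alias table and the missing-value rule are exactly the kind of decisions worth recording in a reusable workflow: the next dataset from the same source will have the same quirks.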
5. How "open" is open data?
While I’m supportive of government open data policies, I’m also sceptical. While stories can be hidden in plain view, in the current political climate it is still more likely that we will have to fight to get the data we need for our data-driven stories. Open data can be great to augment stories, but I don’t want to see open data distracting us from going after stories that are located in data that is hidden from view.