Over the last month, I’ve had the opportunity to work on a development team for Human Rights First (HRF). HRF is an independent advocacy and action organization that challenges American leadership to live up to its ideals. HRF works to secure core freedoms for all Americans by demanding reform, accountability, and justice from the U.S. government and private companies.
Our team worked on a visualization of police use-of-force incidents, backed by a data science model that helps classify possible instances of brutality.
Going into this experience, I hadn’t worked on a web development project in two months. So I took this opportunity to shake the rust off my technical skills, which was a challenge in itself after spending most of those two months learning Python and algorithms.
We began the project by creating a product roadmap of shippable tasks that we wanted to implement in the final release. We used a Trello board to organize these tasks and keep each other updated on which ones were in progress or completed.
You can see in this example that one of the features is the ability for the user to reset the filter parameters. The tasks for this feature included:
- creating a button on the filter overlay
- storing a reference to the default filter parameters
- setting the filter back to those default parameters when the button is clicked.
This process helped us break down exactly how we wanted to build and ship each feature, and we repeated it for every feature we implemented.
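The reset feature described above could be sketched roughly like this in plain JavaScript. The filter fields and their defaults here are illustrative assumptions, not the project’s actual code:

```javascript
// Assumed default filter parameters (illustrative field names).
const DEFAULT_FILTERS = Object.freeze({ state: 'All', forceType: 'All', year: 2020 });

// Current filter state starts as a copy of the stored defaults.
let filters = { ...DEFAULT_FILTERS };

// Called when the reset button on the filter overlay is clicked:
// restore the filter back to the stored default parameters.
function resetFilters() {
  filters = { ...DEFAULT_FILTERS };
  return filters;
}

// Simulate the user changing filters, then clicking the reset button.
filters.state = 'OR';
filters.year = 2019;
resetFilters();
```

In a real UI, `resetFilters` would be wired to the button’s click handler; the key idea is simply keeping an untouched reference to the defaults.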
Working Through the Development Process
On the backend team, our main focus was creating the bridge from data science to the frontend. This was extremely interesting to work on, as it was my first time on a backend that had to work with a data science team.
Our first and biggest hurdle was populating our tables with the data set we were getting from the data science team. Since this was our first time working with a data science team, it was a genuinely daunting one.
We began the problem-solving process by brainstorming the logic flow for how we would handle the data. After spending almost a whole day on the logic, we finally figured out how we wanted to structure the flow. When it clicked (or at least a rough draft of it did), it felt like a eureka moment (the image below visualizes that moment).
Actually building out the logic in code is a different challenge in itself. Things never work out as perfectly as you expect during brainstorming. But that’s what programming is about, right? If you never had to throw things together and see what sticks to solve a programming issue, did you really solve it?
One of the specific challenges we faced was adding the source links included in each data point. The links came from data science as an array inside each data point object. Arrays are not a valid column type in most relational databases, so we couldn’t just plug the array of links into our data table. That led us to explore our options for handling the links.
We ended up destructuring the data and sending it in two waves: first, each data point went into the ‘incidents’ table; then, each link from its array of links went into the ‘sources’ table.
So we mapped two new arrays, incidentsMap and linksMap. For linksMap we had to do a double map so that we could go into each incident and map over its links array. This way we were able to insert both the incidents and the links included in each incident.
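The two-wave transform can be sketched like this. The field names (`id`, `city`, `state`, `links`) and sample values are assumptions for illustration; the real data science payload will differ:

```javascript
// Sample shape of the raw data from the data science team (fields assumed).
const rawIncidents = [
  { id: 1, city: 'Portland', state: 'OR', links: ['https://a.example', 'https://b.example'] },
  { id: 2, city: 'Denver', state: 'CO', links: ['https://c.example'] },
];

// Wave 1: strip the links array so each row fits the 'incidents' table.
const incidentsMap = rawIncidents.map(({ links, ...incident }) => incident);

// Wave 2: the "double map" — for each incident, map over its links array,
// flattening the result into one row per link for the 'sources' table.
const linksMap = rawIncidents.flatMap((incident) =>
  incident.links.map((url) => ({ incident_id: incident.id, url }))
);
```

Each `sources` row carries the parent incident’s id, so the links can be joined back to their incident later.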
Currently our product is connected and working end to end, at least at a foundational level: data flows from the data science side to the backend, and the backend and frontend are talking to each other.
For a rundown of where we are in the project, the live demo below explains everything from the frontend to the backend and even the data science side of things:
For future releases of this project, we envision a profile implementation that allows users to sign up and log in, giving them the ability to save incidents on the site itself so they can come back and revisit specific incidents in the future.
One technical challenge may be using the Heroku Scheduler (Heroku’s version of a cron job) to automatically update the data periodically. This system depends heavily on how the future data science team handles the update endpoint and how they send that data in. If worst comes to worst, a temporary solution would be to have the Heroku Scheduler wipe the data and repopulate the tables with the new data set every day or so. However, this becomes less efficient as the data set grows very large, since the backend database would take a bigger hit on every refresh.
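The wipe-and-repopulate fallback might look something like this sketch. Here `db` is an in-memory stand-in for a real database client, and `fetchLatestDataset` is a hypothetical helper standing in for a call to the data science API; a real job would use the project’s database client and be invoked by the Heroku Scheduler:

```javascript
// In-memory stand-in for the real database tables.
const db = { incidents: [], sources: [] };

// Hypothetical helper: in production this would fetch the data science API.
async function fetchLatestDataset() {
  return [{ id: 1, city: 'Portland', links: ['https://a.example'] }];
}

// Job run daily by the scheduler: wipe both tables, then repopulate
// from the fresh dataset using the same two-wave transform as before.
async function refreshTables() {
  const dataset = await fetchLatestDataset();
  db.incidents = dataset.map(({ links, ...incident }) => incident);
  db.sources = dataset.flatMap((incident) =>
    incident.links.map((url) => ({ incident_id: incident.id, url }))
  );
}
```

The simplicity is the appeal: no diffing against existing rows. The trade-off, as noted above, is rewriting the whole data set on every run.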
Over the course of the month, this project taught me a great deal about what it takes to work on a software development team. Working in a pair programming environment has given me the opportunity to work alongside other developers and see their thought processes as they develop.
This project has definitely helped me further my journey as a full-stack web developer. Working with a data science team and connecting the web application to a data science API was a valuable experience that I can draw on in the future.
Overall, I am extremely proud of the work we accomplished as a team. Taking an unpolished project and creating a solid foundation for future teams to build on is immensely gratifying.