Thoughts on the Potential of Open Data in Cities

The promise of Open Data has drawn most major US cities to implement some sort of program making city data available online and easily accessible to the general public. Citizen hackers, activists, news media, researchers, and more have all made use of the data in novel ways. However, these uses have largely been more information-based than action-based, and there remains work to be done in using Open Data to drive decisions in government and policy-making at all levels, from local to federal. Below I present some of the challenges and opportunities in making use of Open Data in more meaningful ways.

Challenges

Standardization and Organization

Open Data is dirty data. There is no set standard between different cities for how data should be formatted, and even similar datasets within a city are often not interoperable. Departments at all levels of government often act independently in publishing their data, so even if most datasets are available from the same repository (e.g. Socrata), their organization and quality can differ significantly. Without a cohesive set of standards between cities, it is difficult to adapt applications built for one city to others.

Automation

The way data is uploaded and made accessible must be improved. Datasets are often frozen and uploaded in bulk, so when someone downloads a dataset, they get a snapshot for a particular period in time; if they want newer data, they must either wait until it is released or find the bulk download that contains it. This demands human effort both from the curator uploading the data and from the consumer downloading and processing it. Instead, new data should be made immediately accessible as a stream, with old data going back as far in time as it is available. This allows someone to access exactly as much data as they need without the hassle of combing through multiple datasets, and it removes the curator's need to constantly compile and publish newer snapshots.
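The difference between snapshots and a stream can be sketched as a cursor-based fetch. The records, field names, and `fetch_since` function below are invented stand-ins for a real portal's API (Socrata's SODA API supports comparable timestamp filters); the point is that a consumer tracks a cursor instead of re-downloading the full dump.

```python
# Hypothetical in-memory stand-in for a city's service-request feed.
RECORDS = [
    {"id": 1, "created": "2023-01-01T00:00:00"},
    {"id": 2, "created": "2023-01-02T00:00:00"},
    {"id": 3, "created": "2023-01-03T00:00:00"},
]

def fetch_since(cursor):
    """Return only records created after `cursor`, oldest first."""
    return [r for r in RECORDS if r["created"] > cursor]

# A consumer keeps a cursor and asks only for what is new.
cursor = "2023-01-01T00:00:00"
new = fetch_since(cursor)
if new:
    cursor = new[-1]["created"]  # advance the cursor past fetched records
```

Each subsequent call with the advanced cursor returns only records published since the last fetch, so neither side ever handles a redundant bulk file.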

Accessibility

Compared to the amount of data that the government stores, very little of it is digital, and very little of what is digital is publicly available. The filing cabinet should no longer be part of government's storage media. Making all data digital from the start makes it simpler to analyze and release. Finally, much of the data the government does release is in awkward formats such as XLSX and PDF that are not easily machine-readable. If the data is not readily available and easily accessible, it in effect does not exist.
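The machine-readability point is easy to demonstrate. A plain CSV round-trips through standard tooling with no guesswork, whereas a table locked in a PDF must be scraped back apart before any analysis can begin. The agency names and counts below are invented for illustration.

```python
import csv
import io

# Hypothetical per-agency figures, written out as CSV.
rows = [
    {"agency": "DOT", "requests": 120},
    {"agency": "DEP", "requests": 85},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["agency", "requests"])
writer.writeheader()
writer.writerows(rows)

# Any tool, in any language, can parse the data straight back.
parsed = list(csv.DictReader(io.StringIO(buf.getvalue())))
```

The same table exported as a PDF would require OCR or layout-specific scraping to recover, and each publisher's layout would need its own scraper.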

Transparency

Most of the publicly accessible records that the government holds are not readily available unless requested under freedom-of-information laws (e.g., FOIL in New York). The transparency argument for Open Data could be taken to a completely new level of depth and thoroughness if information at all levels of government were made readily available digitally as soon as it was generated. Law enforcement records, public meetings, political records, judicial records, finance records, and any other operation of government that can be publicly audited by its people should be digitally available to the public from the moment it is entered into a government system.

Private Sector Data

Companies such as Uber and Airbnb have come to collect immense amounts of data on transportation and real estate, sectors that have historically been publicly regulated. Agreements should be reached with private companies to allow governments to access as much data as is necessary to ensure proper regulation of these utilities. This data should in turn be added to the public record along with official government data on these utilities.

Opportunities

Analytic Technologies

Policy-making should be actively informed by the nature of a constituency. Data-driven decision making is much hyped, but making it a reality requires software that easily and quickly gives decision-makers the information they need. From the city to the federal level, governments should have dashboards that summarize information on all aspects of citizens' lives: traffic, pollution, crime, utilities, health, finance, education, and more. Much of this data already exists within governments, and surely some dashboards already analyze and visualize these properties individually, but combining all available data on the population of a city can give significantly more insight into a decision than any one of these datasets alone.
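The combination step is mostly a join on a shared geography. As a minimal sketch with invented per-district figures, two departments' datasets keyed by district can be merged into one view that neither dataset offers alone:

```python
# Hypothetical figures from two separate departments, keyed by district.
crime_reports = {"district_1": 42, "district_2": 17}
air_quality = {"district_1": 55, "district_2": 80}  # index; higher is cleaner

# Join on the shared key to build one combined dashboard record per district.
dashboard = {
    d: {
        "crime_reports": crime_reports[d],
        "air_quality_index": air_quality[d],
    }
    for d in crime_reports
}
```

With standardized keys (see the Standardization section above), the same join extends to any number of departments' datasets; without them, every pair of datasets needs bespoke reconciliation first.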

Predictive Technologies

Governments have data going far back into history. Cities like New York have logged every service request for years, and that data is readily available digitally. Using the right statistical analysis on periodic data like heating requests, cities can start to predict which buildings might be at risk for heating violations in the winter and address such issues before they happen. The same can be applied to potholes, graffiti, pollution issues, and essentially any regularly occurring city-wide phenomenon. More precise preventative measures can be taken with more confidence, and eventually the 311 call itself could become unnecessary.
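Even a crude version of this idea is easy to express. The sketch below flags buildings whose most recent winter's heating complaints spiked above their own historical average; the addresses, counts, and threshold are invented, and a real system would fit a proper statistical model to a city's 311 history rather than use a fixed multiplier.

```python
# Hypothetical heating-complaint counts for the last three winters.
complaints = {
    "100 Main St": [2, 3, 8],
    "200 Oak Ave": [5, 4, 3],
}

def at_risk(history, threshold=1.5):
    """Flag a building whose latest winter exceeds 1.5x its prior average."""
    prior = history[:-1]
    average = sum(prior) / len(prior)
    return history[-1] > threshold * average

flagged = [building for building, h in complaints.items() if at_risk(h)]
```

Here only the first building is flagged (8 complaints against a prior average of 2.5), so an inspector could be dispatched before the boiler fails outright.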

Future Outlook

These ideas have the potential to radically change the way we engage with our cities and our politics. We can make decisions based unambiguously on what is happening in the world, and we can refine those decisions based on measured changes in the world over time. A population can know exactly whether its citizens are getting healthier, safer, and smarter, and how to aid in these pursuits. Areas of governance that need more attention, and potential approaches to them, will become increasingly obvious as more information is combined and analyzed in meaningful ways. Decisions can be made, and their outcomes evaluated, with more confidence through a more rigorous process. By making the most of Open Data, we can go beyond interesting information and begin to drive political action that directly benefits our cities, states, and nation.

A Note on Privacy

All of the ideas presented above have serious implications for the privacy of individuals and populations; they consider only the best-case uses of data in our society. Whether a government is analyzing granular data or data on a population in bulk, care must be taken to respect the privacy of its citizens. There is ongoing dialogue about how to balance data collection and privacy, and it is essential that governments and citizens take part in this dialogue as new technologies are developed and our societies become more data-driven.

Science on Supercomputers

A slice of the universe, created on Stampede.
Our simulations are far too computationally intensive to run on normal computers, so they’re run on the Stampede supercomputer at the Texas Advanced Computing Center. Stampede is a massive cluster of computers. It’s made up of 6400 nodes, and each node has multiple processors and 32GB to 1TB of RAM. The total system has 270TB of RAM and 14PB of storage. It’s hard to put these numbers into terms we can compare to our laptops, but essentially, this is enough computing power to simulate the universe with.
Stampede
Sometimes people ask if I use the telescope on top of Pupin, and when I answer “No,” they wonder what on earth I’m doing with my time. Mostly I write code and run scripts. This sort of astrophysics sounds unglamorous, but it amazes me. All I need is my computer and an internet connection, and I have the real universe and any theoretical universe I can dream up at my fingertips. Computers and the internet have completely changed the way we do science, and Stampede is just one reminder of the capability and potential of these new scientific tools.

The Importance of Data Visualization in Astronomy

It is difficult to overstate the importance of data visualization in astronomy and astrophysics. Behind every great discovery, there is some simple visualization of the complex data that makes the science behind it seem obvious. As good as computers are becoming at making fits and finding patterns, the human eye and mind are still unparalleled when it comes to detecting interesting patterns in data to reach new conclusions. Here are a few of my favorite visualizations that simply illustrate complex concepts.

Large Scale Structure

As we’ve mapped increasing portions of the known universe, we’ve discovered astounding structures on the largest scales. Visualizing this structure in 2D or 3D maps gives us an intuitive grasp of the arrangement of galaxies within the universe and the forces driving the creation of that structure.

Galaxy filaments

Sloan Digital Sky Survey
The Sloan Digital Sky Survey is a massive scientific undertaking to map objects of the known universe. Hundreds of millions of objects have been observed going back billions of years. It may seem overwhelming to even begin processing this data, but a simple map of the objects in the sky provides immediate insight into the large scale structure of our universe. We find that galaxies are bound by gravity to form massive filaments, and that these filaments must contain mass beyond what we can see (in the form of dark matter) to form these web-like structures.

Fingers of God

Fingers of God
If you plot galaxies to observe large-scale structure, a peculiar pattern emerges. The structures seem to point inward and outward from our position in the universe. This violates the Cosmological Principle, which states that no position in the universe should be favored over any other. So why do these filaments seem to point at us? The cause of these “Fingers of God” is an observational effect called redshift-space distortion. Galaxies move under the gravitational pull of their clusters in addition to the expansion of the universe, so their light is Doppler-shifted beyond the pure Hubble flow, stretching their inferred distances along our line of sight. Correcting for this effect recovers the isotropic filaments seen above.
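A toy calculation shows where the distortion comes from. The numbers below are invented for illustration: a galaxy's observed recession velocity mixes the Hubble flow with its own peculiar motion, so naively dividing by the Hubble constant misplaces it along the line of sight, and subtracting the peculiar velocity recovers the true distance.

```python
# Toy redshift-space correction; all numbers are illustrative.
H0 = 70.0              # Hubble constant, km/s/Mpc
true_distance = 100.0  # Mpc
v_peculiar = 600.0     # km/s of infall toward the galaxy's cluster

# What we measure: Hubble flow plus the galaxy's own motion.
v_observed = H0 * true_distance + v_peculiar

naive_distance = v_observed / H0            # distorted along the line of sight
corrected = (v_observed - v_peculiar) / H0  # back to the true position
```

For a whole cluster, the spread of peculiar velocities smears its galaxies into an elongated "finger" pointing at the observer, which is exactly the pattern the correction removes.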

Expansion of the Universe

Hubble's Law
In 1929, Edwin Hubble published a simple yet revolutionary plot: the distances of galaxies from us against the velocities at which they moved toward or away from us. What he found was that the farther away a galaxy was, the faster it moved away from us. This could not be the case in what was thought to be a static universe. Hubble’s Law showed that our universe was in fact expanding.
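The relationship is a straight line through the origin, v = H0 * d, so the Hubble constant is just the slope of that line. As a minimal sketch with synthetic, noise-free data generated at H0 = 70 km/s/Mpc (real measurements are scattered, but the linear trend is the whole discovery):

```python
# Synthetic distance-velocity pairs generated with H0 = 70 km/s/Mpc.
H0_TRUE = 70.0
distances = [10.0, 50.0, 100.0, 200.0, 500.0]       # Mpc
velocities = [H0_TRUE * d for d in distances]       # km/s

# Least-squares slope of a line forced through the origin:
# H0 = sum(d * v) / sum(d * d)
H0_est = sum(d * v for d, v in zip(distances, velocities)) / sum(
    d * d for d in distances
)
```

With real data the recovered slope carries uncertainty, and pinning down its value occupied astronomers for decades, but the fit itself is this simple.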

Galaxy Rotation Curves

Rotation Curve of NGC3198
When we plot the rotational velocity of galaxies, we expect it to fall off with increasing radius based on the mass we can observe. Outside the dense center, where little additional visible mass is enclosed, orbital velocities should drop roughly as the inverse square root of the radius, as in Keplerian motion. However, plotting rotational velocity curves reveals something peculiar: the rotational velocity remains constant regardless of radius as you leave the center. This means there must be matter we aren’t seeing: dark matter.
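The Keplerian expectation comes straight from circular-orbit dynamics: with (approximately) all the visible mass M enclosed, v = sqrt(G*M/r), so doubling the radius should cut the velocity by a factor of sqrt(2). The mass and radius below are round illustrative values, not a fit to NGC 3198.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 2.0e41     # kg, a round figure for a galaxy's visible mass

def v_circ(r):
    """Keplerian circular velocity at radius r, all mass enclosed."""
    return math.sqrt(G * M / r)

r = 3.0e20                           # m, an outer-disk radius
ratio = v_circ(2 * r) / v_circ(r)    # predicted drop when radius doubles
```

Observed curves instead stay flat (ratio near 1) out to the largest measured radii, so the enclosed mass must keep growing with radius even where the starlight runs out.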

Our own research

Density Plot
Visualization has proved important in our own research as well. Simple sanity checks on the large-scale structure of our simulations help us make sure they are running properly. Plots of different parameters show simple relationships that arise from the physics of our simulations.