The algorithmic city
Every year, New York City publishes a list of the algorithmic tools its agencies use to make decisions affecting residents' rights, liberties, benefits, and safety. It's a remarkable document. The political fight over what should be on it is more remarkable still.
Local Law 35 of 2022 codified what the Mayor's Automated Decision Systems Task Force had been arguing for since 2018: that the city's algorithmic tools (predictive analytics, machine learning models, generative AI) should not be black boxes. Each year, the Office of Technology and Innovation publishes a compliance report cataloging every system that meets three tests:
- Data analysis: the tool uses ML, AI, predictive analytics, or similar advanced statistical methods.
- Decision-making: the tool assists human operators in implementing policies or making operational decisions.
- Public impact: the tool materially affects residents' rights, liberties, benefits, or safety — including how public resources get allocated.
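Stated as code, the test is a strict conjunction. The sketch below is only a restatement of the three bullets; the `Tool` type and its field names are hypothetical, not anything drawn from the statute or the OTI report.

```python
from dataclasses import dataclass

# Hypothetical restatement of the LL35 inclusion test as a predicate.
# The Tool type and field names are illustrative, not statutory language.

@dataclass
class Tool:
    name: str
    uses_advanced_analytics: bool   # ML, AI, predictive analytics, etc.
    informs_agency_decisions: bool  # assists policy or operational decisions
    affects_public: bool            # rights, liberties, benefits, safety, resources

def reportable(tool: Tool) -> bool:
    """A tool lands on the annual disclosure only if all three tests hold."""
    return (
        tool.uses_advanced_analytics
        and tool.informs_agency_decisions
        and tool.affects_public
    )

# Spellcheck fails the first test; a risk-scoring model passes all three.
spellcheck = Tool("spellcheck", False, False, False)
risk_model = Tool("ACS risk scores", True, True, True)
assert not reportable(spellcheck) and reportable(risk_model)
```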
The threshold matters. Without it, the report would be flooded with spellcheck and spreadsheet formulas. With it, the report becomes a meaningful inventory of where the city has chosen to let software steer policy. The 2024 disclosure caught a Department of Health deployment of the Burrows-Wheeler Aligner (BWA), the same sequence-alignment tool that underpins PulseNet's genomic surveillance, to track Legionnaires' disease outbreaks. It also caught the Administration for Children's Services' predictive risk scores, the system that estimates which children may be at higher risk of violence based on socioeconomic correlates, and that is where the political fight starts.
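For context on what such a deployment plausibly involves, here is a hedged sketch of the standard BWA workflow: index a reference genome, then align sequencing reads from a clinical isolate against it. The file names are hypothetical and the DOHMH pipeline itself is not public.

```python
import subprocess

# Hypothetical sketch of the standard BWA workflow. File names are
# illustrative; the city's actual outbreak-tracking pipeline is not public.

REFERENCE = "legionella_pneumophila_ref.fasta"  # hypothetical reference genome
READS = "patient_isolate_reads.fastq"           # hypothetical isolate reads

# Build the FM-index (based on the Burrows-Wheeler transform) over the reference.
subprocess.run(["bwa", "index", REFERENCE], check=True)

# Align the isolate's reads with bwa mem; SAM output goes to stdout.
with open("isolate_vs_ref.sam", "w") as sam:
    subprocess.run(["bwa", "mem", REFERENCE, READS], stdout=sam, check=True)

# Downstream, aligned isolates are compared to one another to decide whether
# two cases likely share a source, which is the outbreak question.
```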
The risk-score problem
Critics have argued for years that ACS's risk scores mathematically launder structural bias. The model is trained on historical data — historical reports, historical removals, historical interventions — all of which encode the priors of caseworkers, neighborhoods, and policies that disproportionately surveilled poor and non-white families. A "predictive" model trained on that data doesn't predict the future of child welfare so much as reproduce the patterns of its past. The risk scores then enter the loop as a quantitative justification — the algorithm said so — for decisions that the historical data itself was complicit in shaping.
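The mechanism is easy to demonstrate in miniature. The simulation below is a toy illustration, not the ACS model: it gives two groups identical underlying risk, surveils one at three times the rate, and trains a classifier on the resulting labels. The model dutifully scores the surveilled group higher.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Toy illustration, not the ACS model: two groups, identical true risk.
group = rng.integers(0, 2, n)         # hypothetical group label, 0 or 1
true_risk = rng.random(n) < 0.05      # same 5% base rate in both groups

# Historical surveillance differs: group 1 is reported at 3x the rate,
# so its true cases are far more likely to become recorded labels.
report_rate = np.where(group == 1, 0.9, 0.3)
observed = true_risk & (rng.random(n) < report_rate)

# Train on the observed (biased) labels, with group as a feature,
# as any proxy-laden feature set effectively provides.
X = np.column_stack([group, rng.normal(size=n)])  # second column is noise
model = LogisticRegression().fit(X, observed)

scores = model.predict_proba(X)[:, 1]
print(f"mean score, group 0: {scores[group == 0].mean():.4f}")
print(f"mean score, group 1: {scores[group == 1].mean():.4f}")
# Group 1 scores roughly 3x higher despite identical underlying risk:
# the model has learned the surveillance pattern, not the risk.
```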
The LL35 disclosure brings this debate into the open by making the deployment a matter of public record. Before LL35, you had to FOIL the agency to learn whether a given system was even in use. Now it's on a page on the city website, and the next round of debate is over what the city should do with that knowledge.
The GUARD Act
The City Council's response was the GUARD Act (Guaranteeing Unbiased AI Regulation and Disclosure). The bill, which passed unanimously, mandates the creation of an Office of Algorithmic Accountability. Crucially, it shifts the regulatory framework from after-the-fact disclosure to pre-procurement audit: AI tools acquired by the city must pass discrimination audits, risk assessments, and public review before they enter active service. The disclosure list still exists; the GUARD Act adds the requirement that nothing arrives on it without a prior audit.
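What a pre-procurement discrimination audit can look like in its simplest form: the sketch below applies the four-fifths rule, a conventional disparate-impact screen, as a procurement gate. It is an illustration of the idea, not the audit the GUARD Act prescribes.

```python
import numpy as np

def disparate_impact_ratio(decisions, group):
    """Selection rate of the least-selected group over the most-selected.

    The EEOC "four-fifths rule" flags ratios below 0.8. This is one
    conventional screen, not whatever audit the GUARD Act specifies.
    """
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

def procurement_gate(decisions, group, threshold=0.8):
    """Hypothetical gate: the tool proceeds only if the ratio clears it."""
    return disparate_impact_ratio(decisions, group) >= threshold

# A vendor model that flags 12% of group 0 but 30% of group 1:
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 10_000)
flagged = rng.random(10_000) < np.where(group == 1, 0.30, 0.12)
print(round(disparate_impact_ratio(flagged, group), 2))  # ~0.4
print(procurement_gate(flagged, group))                  # False: fails the gate
```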
The State Attorney General's Office has convened symposiums on the unique threats posed by generative AI, signaling that the regulatory conversation will not stop at the city level. Whatever shape it takes — pre-procurement audits, mandatory bias assessments, public hearings — the foundational claim is the same: a system that affects residents' rights deserves at least the same scrutiny as the policies it implements. The fact that the system is software doesn't exempt it from accountability. If anything, it raises the bar.
What's on the disclosure, and what's next
The 2024 list runs across virtually every city agency. NYPD's gunshot-detection acoustic triangulation. The Department of Buildings' machine-learning permit-fraud detector. The Department of Sanitation's route optimization. The Department of Education's school-matching algorithm. The Mayor's Office's GenAI-assisted public-comms image generation, formally disclosed. Each entry on the list is one more thing the public didn't know was in operation before LL35 made the city say so.
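One entry has unusually well-documented internals: the DOE's high-school match is publicly described as a student-proposing deferred-acceptance mechanism. Here is a compact sketch with toy preferences and hypothetical school names; the real match layers priorities, screens, and tie-breaking lotteries on top.

```python
# Student-proposing deferred acceptance, the mechanism underlying the
# DOE's high-school match. Preferences and capacities here are toy data.

def deferred_acceptance(student_prefs, school_ranks, capacity):
    """student_prefs: student -> ordered list of schools.
    school_ranks: school -> {student: rank}, lower rank preferred.
    capacity: school -> number of seats."""
    next_choice = {s: 0 for s in student_prefs}   # pointer into each list
    held = {sch: [] for sch in school_ranks}      # tentative admits
    free = list(student_prefs)

    while free:
        student = free.pop()
        prefs = student_prefs[student]
        if next_choice[student] >= len(prefs):
            continue                               # list exhausted: unmatched
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        held[school].append(student)
        # Keep the school's top choices up to capacity; bump the rest,
        # who go back in the pool and propose to their next choice.
        held[school].sort(key=lambda s: school_ranks[school][s])
        while len(held[school]) > capacity[school]:
            free.append(held[school].pop())

    return {sch: sorted(admits) for sch, admits in held.items()}

students = {"ana": ["P.S. 1", "P.S. 2"], "ben": ["P.S. 1", "P.S. 2"],
            "cara": ["P.S. 2", "P.S. 1"]}
ranks = {"P.S. 1": {"ana": 2, "ben": 1, "cara": 3},
         "P.S. 2": {"ana": 1, "ben": 3, "cara": 2}}
caps = {"P.S. 1": 1, "P.S. 2": 1}
print(deferred_acceptance(students, ranks, caps))
# {'P.S. 1': ['ben'], 'P.S. 2': ['ana']}  (cara's two choices both fill)
```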
The frontier of "open data" used to be open documents. Then it was open datasets. Then it was open APIs. The next horizon is the open algorithm: what the model actually does, on what data, with what incentives, to whose benefit, and to whose detriment. New York City is further along that path than most municipalities in the world. The fight over what the destination looks like is only just starting.