Augmented: Planners in an Era of Generative AI

The rapid development of generative AI, which focuses on creating new content such as images or text, is challenging preconceptions of computers' capabilities.

Generative AI is starting to power political campaigns, redesign streets, aid public communications, disrupt education, synthesize video and audio, and integrate with business software (Microsoft 365, G-Suite, Adobe). OpenAI's ChatGPT has set adoption records: 1 million users after 5 days and 100 million after 2 months.

These advances have the potential to improve planning processes but raise significant concerns, and their scope and pace have so far outstripped existing educational and regulatory processes. Large Language Models (LLMs) represent a technology of significant power whose ultimate bounds remain uncertain.

Key AI Terms

Fine-Tuning: Adapting a pre-trained foundation model to specific tasks or domains by training it on a smaller, targeted dataset. Fine-tuning focuses the model and is also the step where important policy controls are introduced.

Foundation models: Enormous, general-purpose deep learning models that can be fine-tuned for different applications. These models are widely identified as the basis of a new paradigm of large-scale training in machine learning research.

Generative AI: Advanced artificial intelligence models, such as ChatGPT, that create human-like text, images, or sounds based on input data, enabling diverse applications like conversational agents, content generation, and creative tasks.

Large Language Models (LLMs): Foundational AI models trained on massive datasets containing text from diverse sources. LLMs, like those powering ChatGPT, learn patterns, grammar, syntax, context, and semantic relationships to generate coherent and contextually relevant text.

Policy Constraints: Rules or guidelines controlling the AI model's behavior during text generation, ensuring content meets specific requirements and aligns with desired goals.

Training Data Corpus: A large (typically multi-terabyte) dataset of samples used to train AI models; it determines their base knowledge before fine-tuning.

Machine Learning Advances Reshape Technology Landscape

These advances are stunning. They seem sudden, but they stem from a paradigm shift: behind the scenes, machine learning has been shifting to large-scale training, using terabytes of data to build massive, general-purpose foundation models that can be specialized for different applications.

Over the past decade, resources (compute) used for top-end deep learning models have doubled every six months — a 1,000-fold increase every five years. These models are trained to build an internally consistent "world model" of sequences of text or patterns in images.
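The arithmetic behind that 1,000-fold figure is worth making explicit: a doubling every six months compounds to roughly three orders of magnitude over five years. A quick check in Python (the figures are the article's own, not new data):

```python
# Compute budgets doubling every 6 months, over a 5-year span
months = 5 * 12            # 60 months
doublings = months // 6    # 10 doublings in 5 years
growth = 2 ** doublings    # 2^10 = 1024, i.e. roughly a 1,000-fold increase
print(growth)
```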

As a second step, foundation models are "fine-tuned" — sometimes with a single powerful computer, but often with significant human interaction to evaluate results. This step is critical for enabling conversational interfaces, ensuring safety, and supporting specific domain knowledge. How are these answers evaluated? By whom?

OpenAI, the maker of ChatGPT, has faced significant criticism over how it performed this process. OpenAI has obscured its methods with its most recent LLM, partly out of an abundance of caution about the potential ramifications of an open release. This has compounded existing concerns about the use of "black box" models in public contexts.

An entire open-source LLM ecosystem has rapidly arisen in response, offering compelling alternatives. These models tend to be smaller and more specialized, yet often provide comparable performance for certain applications. For example, ClimateBERT, a model trained only on peer-reviewed climate research evaluated by experts, has been shown to provide significantly more accurate results than ChatGPT on climate-related queries.

The image shows how machine learning has gone through three eras of scaling. With each era, the energy, data, and resources required to train these models have escalated rapidly. Credit: Sevilla et al, 2022. Creative Commons.

LLMs Demonstrate Emergent Behavior

With this scale and complexity, large-scale models such as LLMs have demonstrated "emergent behavior." They can generalize to complete tasks they were not specifically trained to do, and developer communities are actively pushing their capabilities for task orchestration and context-aware queries. Projects include:

  • HuggingGPT — chains other models available in open-source communities to solve multistep tasks that can, for example, combine image recognition and language understanding.
  • AutoGPT — attempts to make ChatGPT autonomous through recursive prompting and expands its utility by connecting it to the web. It can self-prompt its way toward abstract goals like "conduct a literature review on sea level rise and roadway abandonment."
  • Adding context — researchers are finding methods that enable these models to provide more accurate, contextually aware responses by injecting additional context from external knowledge stores into a query. Imagine a model that could answer questions about a specific zoning code, plan, or arbitrary PDF.
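The "adding context" pattern above, often called retrieval-augmented generation, can be illustrated with a toy sketch. Everything here is hypothetical: the zoning-code snippets are invented, and the keyword-overlap scoring stands in for the vector-embedding search real systems use.

```python
# Toy retrieval-augmented generation (RAG) sketch.
# Real systems embed documents with a vector model; this version scores
# passages by simple keyword overlap, for illustration only.

# Hypothetical zoning-code excerpts (invented for this example)
documents = [
    "Section 4.2: Maximum building height in the R-1 district is 35 feet.",
    "Section 7.1: Off-street parking requires one space per dwelling unit.",
    "Section 9.3: Accessory dwelling units are permitted in all residential districts.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k passages sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject the retrieved passage as context ahead of the user's question."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The assembled prompt now carries Section 4.2 as grounding context,
# which is what lets the model answer about this specific code.
prompt = build_prompt("What is the maximum building height in R-1?", documents)
print(prompt)
```

In a production pipeline, the assembled prompt would be sent to an LLM; the retrieval step is what ties the model's answer to the specific document rather than its general training data.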

LLMs are already affecting planners' work. During "A Planning Chat with ChatGPT," hosted by APA's research team and the APA Technology Division, planners reported uses ranging from memo editing and curation to idea generation and more. A recent international roundtable attracted planners and technologists from across the world. Discussions of this technology's potential have included creating a common, shared understanding of zoning codes and regulations across regions, as demonstrated by the Urban Institute and the Cornell Legal Constructs Lab. Beyond applications, many are aware of the risks of "stochastic parrots," ranging from systemic bias to environmental costs. APA has published several planning-specific publications, including a PAS QuickNotes on ChatGPT, a PAS Memo on "Artificial Intelligence and Planning Practice," and AI in Planning: Opportunities and Challenges and How to Prepare.

Based on these efforts and others, here's our initial SWOT analysis.

Generative AI and Planning SWOT Analysis:

  • Strengths: accelerates tool development; boosts planner productivity; generalizes across tasks; expands on ideas and narratives; research assistance; rapid draft visualizations; advanced editing; scales analysis from few examples; document search and synthesis.
  • Weaknesses: encoded bias and framing problems; unpredictable and incorrect outputs; undecided legality (e.g., copyright); privacy risks (model leaks and training data); ethics of authorship; misinformation risk; black-box complexity; exploitation in AI research; environmental footprint of models; limited access.
  • Opportunities: models tuned to planning needs; open-source and affordable models; scenario ideation and generation; synthesis of public feedback; rapid proposal rendering; AI design evaluation; interactive community engagement tools; fusion of data on existing conditions with data of tomorrow.
  • Threats: skill atrophy risk; insufficient regulation; unaccountable process risk; magnified power asymmetries; labor market disruption; exacerbated inequities; cybersecurity and mass information campaigns; value lock-in (static data versus social change).

Generative AI and Planning SWOT analysis. Credit: David Wasserman, AICP, and Michael Flaxman, PhD.

AI in Planning: Promise and Peril

Using AI models in planning practice offers great promise and peril — and risks disrupting community economies. Reports from joint U.S. and EU research, the OECD, and Goldman Sachs explore the economic impacts of AI, noting that AI trends differ from those of previous automation eras, which reduced demand for routine middle-wage jobs but increased it for non-routine low- and high-wage jobs. These reports suggest generative AI could replace a quarter of existing work, though the impact is likely to vary by sector and new jobs are likely to arise in their place. This automation could boost labor productivity by 1.5 percent over a 10-year adoption period.

Who benefits from such productivity gains is unstated, but the risk of increasing inequities is real. The OECD goes as far as to suggest that governments work with stakeholders to "prepare for the transformation of the world of work and society."

This is a rapidly evolving topic. Everything could change: community engagement, how codes and policies are presented, what a plan is, and the economies of our communities. Near-term steps for planners:

  1. Procure responsibly: Ask vendors how they address bias. Familiarize yourself with data sheets for training data, emerging algorithmic auditing tools, and open-source alternatives. Demand accountability and transparency from providers, and start with solutions that address existing community needs.
  2. Advocate for appropriate regulation: The White House's proposed AI Bill of Rights addresses privacy, algorithmic discrimination, and more. Initiating broader conversations is crucial to tackle accountability, transparency, and power imbalances.
  3. Plan with foresight: Planning for the future requires understanding disruptive trends and preparing to address them. Technology action plans and scenario planning already support decision-making under uncertainty.
  4. Plan ethically: Many aspects of the AICP Code of Ethics require planners to think before they naively integrate generative AI into workflows. We must not hide behind models when our rules of conduct require us to provide "accurate information on planning issues." The APA Technology Division published an open letter urging planners to consider the applications of generative AI in ethical practice.
  5. Apply the reversibility principle: To maintain urban resilience, initial applications of AI should be made in investments whose impacts are reversible. Extra precautions should be taken when applications have the potential for harm or when integrations with important civic systems are planned.

Much has changed, but much remains the same. We can focus on a humane vision of the future. We can plan for the needs of the communities we serve.

 

Top image: Urbanist Grimoire — created with DreamStudio.


About the Authors
David Wasserman, AICP, is the civic analytics practice leader at Alta Planning + Design. His work lies at the intersection of urban informatics, 3D visualization, geospatial analytics, and visual storytelling. His current areas of focus are enabling data-informed scenario planning, incorporating civic data science into planning projects with web delivery and computer vision-derived datasets, and generating accessibility metrics that can identify the possible benefits of projects and who they go to.
Michael Flaxman, Ph.D., is the spatial data science practice lead at Heavy.ai. After 20 years of working within the domain of spatial environmental planning, he now actively works to develop the next generation of geospatial computing technologies at Heavy.ai. His main goal is to continue to develop spatial scenario planning tools, ultimately bringing the benefits of sustainable environmental planning to a much wider global audience.

May 9, 2023

By David Wasserman, AICP, Michael Flaxman