UEI's Statement on Generative AI and Large Language Models

At the Urban Equity Institute (UEI), we recognize that generative AI has many applications and is a useful tool for researchers, writers, artists, analysts, and computer scientists. We require, however, that certain guardrails be in place: when appropriate, usage is encouraged, but extensive reliance on such tools is discouraged, and transparency is required.

Generative AI refers to any tool or platform that produces new content in response to prompts. A Large Language Model (LLM) differs in that, while generative and a form of artificial intelligence, it works entirely with text. Both, however, fall under the umbrella of artificial intelligence tools.

Note that material generated by AI or LLMs includes but is not limited to: summaries, analyses, opinions, writings, images, charts, graphs, visuals, photos, artwork, and music. 

Generative AI shall not be used to write articles, columns, analyses, or editorials, or to provide feedback on UEI material. Additionally, work submitted to UEI that has been generated by an LLM or another AI source will not be considered “publishable” by the partners of UEI. This includes work submitted to SCOPE or any publishable content available on UEI’s website. Authors who submit work produced with generative AI or LLMs will not only be denied publication; as a consequence, their future work will not be accepted either. Similarly, using generative AI or LLMs in correspondence with other UEI members, whether through email, text exchanges, or other communication methods, is not only unnecessary but prohibited.

Separately, during the research process, the use of AI and LLMs to summarize content, identify research questions, or otherwise aid in the research process is permissible, but only under the following conditions:

  1. Expectation of Transparency: authors and/or researchers who have used AI in their research should make this clear in their work and discuss it with other members of UEI.
  2. Explanation of Usage: authors and/or researchers should make clear in their work, and to other members of UEI, how AI and/or LLMs were used, provided that use was in accordance with the aforementioned guidelines.
  3. Credibility and Ownership: authors and/or researchers may not claim work generated by AI or LLMs as their own. They must clearly distinguish what was generated by AI or LLMs from what was not, and in both cases the work must comply with existing UEI policy.
  4. Due Diligence: generative AI and LLMs have a history of producing unreliable, false, or inaccurate output. Accordingly, researchers and/or authors must fact-check and verify anything produced by generative AI or LLMs, whether writing or otherwise. Should discrepancies arise, they must be noted in the final work and brought to the attention of the staff overseeing the project.

Finally, any usage of generative AI or LLMs must be brought to the attention of the UEI members overseeing the research or production process of the work in question. Note that these bylaws do not encompass every use case and scenario applicable to generative AI and LLMs. Hence, if UEI membership/leadership determines by majority vote that AI or LLMs have been used inappropriately, consequences will follow in accordance with those outlined in these bylaws.