Oregon Values and Beliefs Center
News Release
Survey of Oregonians: Artificial Intelligence - 07/09/24






Amaury Vogel, Executive Director:

  • “While some Oregonians note potential benefits of AI, many of us feel like we’re not quite up to speed on how to really tap into its potential.”
  • “Oregonians are hopeful about AI’s potential to advance research and medicine, but they’re worried about negative impacts on education, jobs, politics, and art. They’re concerned enough about the impact on jobs, they want to make sure people who lose their jobs due to advances in AI receive unemployment benefits.”
  • “When discussing necessary measures in response to AI development, seven out of ten Oregonians support incentivizing technology that gives low- and middle-income residents more affordable access to necessities, like food, housing, and utilities. Oregonians also generally support international cooperation with allies to try to prevent AI from being used for weaponry and cyberwarfare.”
  • “When it comes to making decisions about artificial intelligence, the scientific community is seen as the most trustworthy, but even ordinary people are seen as more trustworthy than the government.” 



Comment on OVBC AI Survey Findings and Generative AI

Rebekah Hanley


Generative artificial intelligence (“AI”) is older than many realize; indeed, OpenAI introduced its first GPT (short for “Generative Pre-trained Transformer”) in 2018.  Still, it was the November 2022 public launch of ChatGPT, built on GPT-3.5, that brought widespread access to the tool.  That access shined a bright light on the power, possibilities, and perils of large language models (“LLMs”), AI tools capable of quickly generating polished prose that seems like it was carefully crafted by humans.  As a legal writing professor at the University of Oregon, I have been contemplating the profound implications of LLMs’ fluency, range, and speed since the first time I saw one “write.”  And, like other educators (and students and parents); private-sector leaders and workers; and government officials, I am laboring to stay current in a rapidly shifting landscape, to adjust longstanding policies and practices, and to plan for the future of writing in an AI-enhanced world.
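At their core, models like GPT work by repeatedly predicting a likely next word from patterns in training text. The toy sketch below illustrates that idea with a simple bigram model (counting which word most often follows each word); it is only a conceptual illustration, not how an actual LLM is built, and all names in it are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=8):
    """Greedily emit the most frequent successor word at each step."""
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # no observed successor; stop early
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

corpus = "the model writes text and the model writes prose quickly"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

A real LLM replaces these word counts with a neural network trained on vast text corpora and samples probabilistically rather than greedily, which is what produces the fluent, varied prose discussed above.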

The launch of ChatGPT, and the accompanying media coverage of related technology, affected Oregonians’ views about all AI.  Oregon Values and Beliefs Center’s August and December 2023 statewide studies captured the AI-related hopes, fears, and concerns of Oregonians; those findings may help the Oregon legislature consider how AI affects the state’s economy and social well-being.  The surveys show that Oregonians’ greatest concerns about AI center on control, safety, security, and malicious use, with almost three of every four respondents worrying about unintended, unmanageable consequences and exploitation for destructive purposes.  In the short term, Oregonians view the social, political, and economic effects of AI as materially more threatening to humanity than climate change, though in the long term they regard those two types of threats as about equal.  On balance, over a quarter of those surveyed think that AI’s benefits do not outweigh its risks in either the short or the long term; an additional eleven percent of respondents believe that AI’s benefits do not outweigh its risks in the short term.

But Oregonians’ opinions on these matters are to some extent uninformed.  As of August 2023, only thirty percent of respondents reported having personally used ChatGPT, which had by then been freely available to the public for over eight months.  Almost as many respondents did not know that ChatGPT was an example of AI; many respondents did not realize that AI has long been integrated into numerous commonly used digital platforms and tools.

The survey results reflect a sense of urgency around responding—in some way—to the shifting landscape.  In August 2023, almost sixty percent of respondents wanted both the federal and state governments to issue regulations ensuring that AI research and development serves the public interest, though fewer than twenty percent of respondents trusted government entities to make AI-related decisions.  Perhaps surprisingly, respondents trusted AI creators and marketers to self-regulate more than they trusted governments to regulate AI.  Four months later, that had shifted: The suggestion that corporations developing AI products should self-regulate enjoyed half as much strong support as the call for government regulation.  And while two-thirds of Oregonians believed that state officials lack the necessary expertise to regulate AI, over a third thought the state should move forward with regulation despite that expertise gap.

Oregonians expressed mixed opinions about how the state should respond to generative AI’s utility and risks.  Some Oregonians see opportunity and hope that the state will capitalize on it, becoming a leader in the sector by recruiting AI companies and research organizations.  At the same time, one in five Oregonians suggests that the state ban the use of new AI models by government employees.  With its diverse positions, Oregon mirrors the nation: the Pew Research Center has recently documented divergent views and uncertainty among teachers (about AI tools’ benefits and harms to K-12 education) and among the general public (about whether AI tools should cite source materials).


Overall, Oregonians recognize that AI is here to stay. Almost two-thirds of those surveyed agreed that K-12 AI literacy programs are necessary, likely in part to prepare students for the jobs of the future.  This vision of the future triggers financial concern: Almost three-fourths of those surveyed agreed that unemployment benefits should be available for workers whose jobs become obsolete due to AI.  These are logical reactions to the pace of generative AI improvement: Corporations are investing aggressively in this technology, and new products are being tailored for specific contexts and to minimize known risks and weaknesses. While simply ignoring the technology is not a viable strategy, specifically how educators and others should react to generative AI’s growth raises open—and challenging—questions; finding answers will require creativity and cautious, but bold, experimentation.



Professor Rebekah Hanley has been a faculty member at the University of Oregon School of Law since 2004.  She teaches foundational lawyering skills to first-year law students; she also teaches professional responsibility and advanced legal writing courses.  As Oregon Law’s current Galen Scholar in Legal Writing, Professor Hanley is studying generative AI and its implications for law school teaching and the practice of law.

