By Philipp Baaden, Priscila Ferri, and John P. Nelson
In November 2023, 31 early-career researchers from Europe and elsewhere gathered in Manchester, England, for the week-long Artificial Intelligence for Science, Technology, and Innovation Policy Winter School (#AI4STIP). With Eu-SPRI sponsorship, #AI4STIP brought these researchers together to delve into the intricate interplay of AI, scientific progress, ethical research, and policy shaping. Hosted by the Manchester Institute of Innovation Research (MIOIR) at the University of Manchester’s Alliance Manchester Business School, the event offered an immersive program on the governance, ethics, scientific implications, and research applications of artificial intelligence.
To capture the collective wisdom and varied insights gained during the event, we asked participants to tell us the most important lesson they had taken away from #AI4STIP. The breadth of responses mirrored the depth and diversity of the program’s content, as well as the variety of participants’ research backgrounds. To synthesize these perspectives, we used a large language model (LLM), which offered the following summary:
“Through the AI4STIP Winter School, I’ve gained an immersive understanding of AI’s multifaceted dimensions, witnessing its potential applications and the tools available, notably large language models. This experience not only expanded my technical prowess but also heightened my awareness of AI’s ethical and societal implications, emphasizing responsible and strategic usage across diverse research domains.”
These reflections are rooted in the expertise shared across #AI4STIP’s three instructional tracks, each curated by leaders in the field of AI for science and innovation.
The first track, spearheaded by Philip Shapira (University of Manchester and Georgia Tech) and Justin B. Biddle (Georgia Tech), focused on ethics, societal implications, and emerging global governance structures for AI. These sessions guided attendees through the potential stakes of AI development and implementation, ranging from job losses to intellectual property disruption to much-discussed extinction threats. They also surveyed the global landscape of AI investment and leadership, along with the effects (or, sometimes, lack thereof) of the proliferation of AI ethics guidelines and the relatively slow growth of AI regulation.
In #AI4STIP’s second track, VTT’s Arash Hajikhani and Carolyn Cole provided attendees with examples, instruction, and hands-on practice in applying LLMs to science and innovation policy research. Attendees learned about the architecture and functioning of LLMs, the commercial and open-source tools available for using LLMs in research, and examples of using LLMs for large-scale qualitative classification, fuzzy searches and content summaries within documents, and bibliometric trend analysis.
Throughout the week, attendees completed hands-on small-group projects using the ChatGPT API and other commercial LLM research tools to analyze and visualize documentary evidence such as journal articles and reports.
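For readers curious what such an exercise can look like in practice, here is a minimal sketch (not taken from the Winter School materials) of using the OpenAI Python client to classify a research abstract into broad policy themes, in the spirit of the large-scale qualitative classification work described above. The model name, prompt, and category list are illustrative assumptions, not the tools or prompts used at #AI4STIP.

```python
# Minimal sketch: classify a research abstract into broad policy themes
# using the OpenAI Python client. Model, prompt, and categories are
# illustrative assumptions, not the Winter School's actual materials.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

CATEGORIES = ["AI governance", "Research evaluation", "Innovation policy", "Other"]

def classify_abstract(abstract: str) -> str:
    """Ask the model to assign exactly one predefined category to an abstract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-completion model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You classify research abstracts. Reply with exactly one "
                    f"category from this list: {', '.join(CATEGORIES)}."
                ),
            },
            {"role": "user", "content": abstract},
        ],
        temperature=0,  # deterministic output helps keep coding reproducible
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    example = (
        "We examine how national AI strategies shape public research "
        "funding priorities across OECD countries."
    )
    print(classify_abstract(example))
```

In a larger study, a loop over a corpus of abstracts or report excerpts, followed by simple tabulation of the returned categories, would approximate the kind of large-scale qualitative classification the attendees practiced.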
The third track, led by Barbara Ribeiro (SKEMA) and Cornelia Lawson (University of Manchester), shed light on AI’s impacts within scientific realms. Ribeiro highlighted the paradox that automating lab research creates new “mundane knowledge work” and discussed AI’s differential impact on researchers at different levels of seniority and across other demographic groups. Lawson presented on digital technologies’ and AI’s potential effects on scientific team size, collaboration, and institutional advantage, and shared preliminary findings on relationships between AI use, project initiation, and university type (among other variables).
Complementing these tracks were keynote addresses and informal evening “fireside chats” from invited speakers. Laurie Smith presented on Nesta’s experimentation with AI for social good, such as providing chatbot interfaces to support parents in dealing with health problems or designing activities for children. Alistair Nolan (OECD) advocated for adoption of AI in science as a way to increase the productivity of research and suggested policies to facilitate further development and adoption of AI for science. Elle Farrell-Kingsley (AI Curator and Dialogue Writer) gave attendees a look into the ground-level processes by which LLM developers try to make their tools safe, reliable, and comfortable—but not excessively humanlike in presentation. Parsa Ghaffari (Quantexa) offered an industry perspective on the evolution of decision-making applications from natural language processing to generative AI and LLMs. Samuel Kaski (University of Manchester and Aalto University) spoke with attendees about his goals, decision-making processes, and treatment of societal consequences as a leading AI researcher.
MIOIR’s Holly Crossley and Chloe Best provided highly effective support in organizing and running the Winter School.
Despite a packed schedule, attendees bonded over meals and explored Manchester, visiting Christmas markets, touring the Old Trafford football stadium, and exploring the Science and Industry Museum.
The AI for Science, Technology, and Innovation Policy Winter School was supported by the European Forum for Studies of Policies for Research and Innovation (Eu-SPRI Forum), the Manchester Institute of Innovation Research, and the Alliance Manchester Business School. Additional support for student and faculty travel was provided by the Georgia Tech School of Public Policy and the Ivan Allen College, the Partnership for the Organization of Innovation and New Technologies (Polytechnique Montréal, Canada), and VTT Finland.
Philipp Baaden is a PhD student at Ruhr-Universität Bochum and at Fraunhofer INT, interested in the evolutionary process of new interdisciplinary scientific fields. Priscila Ferri is a PhD student in science, technology, and innovation policy at MIOIR, University of Manchester, examining how AI shapes research and innovation practices in academic laboratories. John P. Nelson is a postdoctoral research fellow at the Georgia Institute of Technology’s School of Public Policy, focusing on ethics and societal implications of AI.