
Assessing AI readiness

Editor's note: This expert Q&A is part of our “AI is everywhere ... now what?” special project exploring the potential (and potential pitfalls) of artificial intelligence in our lives. Explore more topics and takes on the project page.

AI is starting to sweep across every industry worldwide, transforming business practices.

While organizations and institutions are doing their best to safeguard themselves as the technology evolves, adopting and managing this new power carries real risk.

That’s why Scott Emett, an associate professor with the W. P. Carey School of Business at Arizona State University, helped develop the first-ever enterprise risk framework for generative artificial intelligence — called the GenAI Governance Framework.

During development, Emett collaborated with professors from Brigham Young University and the University of Duisburg-Essen, the software company Boomi and consulting firm Connor Group to organize a list of possibilities that organizations of all sizes must confront. His collaborators include David Wood, a BYU professor in the Marriott School of Business; Marc Eulerich, an economist with the University of Duisburg-Essen; and Jason Pikos of the Connor Group.

Built in collaboration with over 1,000 business leaders, academics and industry contacts, the 20-page GenAI Governance Framework and accompanying GenAI Maturity Model provide a detailed guide and comprehensive methodology for organizations to assess their AI readiness, identify and manage risks associated with GenAI technologies, and move into responsible GenAI adoption.

Emett talks about AI and business, and why the governance framework is needed.

Scott Emett

Question: What inspired you and your team to create the GenAI Governance Framework, and how do you see it helping companies adopt AI safely?

Answer: The GenAI Governance Framework was inspired by companies’ excitement about the amazing potential of generative AI. Everywhere you look, organizations are dedicating significant attention and resources to GenAI and are eager to capitalize on its transformative power.

However, in their enthusiasm to adopt this innovative technology, organizations can overlook its risks. Experience shows that companies sometimes struggle to fully anticipate, identify and manage the risks of new technologies. Careful, structured thinking around GenAI risks helps organizations avoid using the technology in ways that harm stakeholders.

So, we saw a clear need for a framework that helps companies harness the power of GenAI responsibly. Our goal was to provide an accessible guide that helps organizations navigate the complexities of GenAI adoption, ensuring they can maximize its benefits while mitigating potential risks. We hope to support companies in making safe and smart AI decisions by providing practical steps and clear guidance.

Q: How did the collaboration with business leaders, academics and industry experts shape the development of the GenAI Governance Framework?

A: The GenAI Governance Framework's development was a team effort shaped by the insights of over 1,000 business leaders, academics and industry experts. At every step, we relied on the expertise of these leading practitioners and scholars to guide us.

We had an incredible group of contributors, including GenAI specialists, internal and external auditors, regulators, audit committee members, C-suite executives and more. Their diverse backgrounds and experiences were invaluable. They helped us identify the most important risks associated with generative AI and categorize them into clear, understandable domains.

These experts also brought insights from established risk frameworks, ensuring our approach was grounded in proven methodologies. Throughout the process, they provided feedback and helped us iterate and revise the framework to make it as comprehensive and practical as possible. Their contributions were crucial — we couldn't have done it without their invaluable input and support.

Q: The framework covers five main areas. Can you explain these and why they're critical for managing AI?

A: The framework focuses on five key areas, each vital for managing AI effectively:

  1. Strategic alignment and control environment: This ensures that GenAI projects align with the organization’s goals and strategies. It sets the foundation for how AI fits into the bigger picture.
  2. Data and compliance management: Managing data responsibly and ensuring compliance with legal standards are crucial. This area covers data handling and protection to avoid legal pitfalls.
  3. Operational and technology management: This involves integrating GenAI into daily business processes and managing the technology, including IT security. It ensures that AI tools are effective and secure.
  4. Human, ethical and social considerations: AI impacts people, so it’s essential to address training, ethical use and broader social implications. This ensures AI is used fairly and responsibly.
  5. Transparency, accountability and continuous improvement: Keeping AI processes transparent and accountable is essential. This area focuses on tracking AI decisions and continuously updating practices as AI evolves.

To support organizations in these areas, the framework outlines several control considerations for each of the five domains and includes a maturity model to help assess and improve AI practices.
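To make the idea of a maturity self-assessment concrete, here is a minimal sketch of how an organization might record a rating for each of the five domains and surface its weakest areas first. This is purely illustrative and not part of the published framework: the domain names come from the article, but the 1-to-5 scale, the level labels and the roll-up logic are assumptions made for the sake of example.

```python
# Illustrative only: a hypothetical self-assessment helper, not the GenAI
# Governance Framework's own tooling. The five domain names come from the
# article; the 1-5 maturity scale and roll-up are assumptions.
from dataclasses import dataclass
from statistics import mean

DOMAINS = [
    "Strategic alignment and control environment",
    "Data and compliance management",
    "Operational and technology management",
    "Human, ethical and social considerations",
    "Transparency, accountability and continuous improvement",
]

# Assumed maturity labels (1 = ad hoc, 5 = optimized); not defined in the article.
LEVELS = {1: "Initial", 2: "Developing", 3: "Defined", 4: "Managed", 5: "Optimized"}


@dataclass
class DomainScore:
    domain: str
    level: int  # 1-5 maturity rating assigned by the organization

    def label(self) -> str:
        return LEVELS[self.level]


def summarize(scores: list[DomainScore]) -> None:
    """Print each domain's rating, weakest first, plus an average score."""
    for s in sorted(scores, key=lambda s: s.level):
        print(f"{s.level} ({s.label():<10}) {s.domain}")
    print(f"\nAverage maturity: {mean(s.level for s in scores):.1f}")


if __name__ == "__main__":
    # Example self-assessment for a hypothetical mid-sized organization.
    summarize([DomainScore(d, lvl) for d, lvl in zip(DOMAINS, [4, 3, 3, 2, 2])])
```

In practice, an organization would replace the hard-coded ratings with evidence gathered against the framework's control considerations, but the basic idea is the same: score each domain, find the gaps, and prioritize improvement where maturity is lowest.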

Q: Companies come in all shapes and sizes. How can different organizations tailor the GenAI Governance Framework to fit their needs?

A: The beauty of the GenAI Governance Framework is its flexibility. It’s designed to be adaptable, so companies of all sizes can tailor it to their unique needs. Whether a small startup or a large multinational, you can customize the framework to align with your specific goals, risk appetite and resources.

The framework provides control considerations and a maturity model that can be scaled up or down. Organizations can prioritize the most relevant domains and focus on the key risks that matter most to them. By involving the right stakeholders and regularly reviewing and updating their approach, companies can ensure that their GenAI practices are effective and sustainable.

