While almost nine in 10 business leaders agree it’s important to have clear guidelines on artificial intelligence (AI) ethics and corporate responsibility, barely a handful admit they have such guidelines, a recent survey shows.
Such findings suggest there’s confusion about what approaches are needed to govern AI adoption, and that technology professionals need to step forward and take the lead in ensuring the safe and ethical development of their data-led initiatives.
Also: Five ways to use AI responsibly
The results are from a survey based on the views of 500 business leaders released by technology company Conversica, which says: “A resounding message emerges from the survey: a majority of respondents recognize the paramount importance of well-defined guidelines for the responsible use of AI within companies, especially those that have already embraced the technology.”
Almost three-quarters (73%) of respondents said AI guidelines are indispensable. However, just 6% have established clear ethical guidelines for AI use, and 36% indicate they might put guidelines in place during the next 12 months.
Even among companies with AI in production, one in five leaders admitted to limited or no knowledge about their organization’s AI-related policies. More than a third (36%) claimed to be only “somewhat familiar” with policy-related concerns.
Guidelines and policies for addressing responsible AI should incorporate governance, unbiased training data, bias detection, bias mitigation, transparency, accuracy, and the inclusion of human oversight, the report’s authors state.
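The report names bias detection as a policy component without prescribing tooling, but one common check of that kind can be sketched in a few lines: comparing positive-outcome rates across groups (a demographic parity gap). The function name, threshold, and data below are illustrative, not drawn from the report.

```python
def demographic_parity_gap(outcomes, groups):
    """Absolute difference in positive-outcome rates between groups.

    outcomes: list of 0/1 results (e.g. 1 = loan approved)
    groups:   list of group labels, aligned with outcomes
    A gap near 0 suggests parity; a large gap flags potential bias
    for human review -- it does not prove discrimination on its own.
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical approval outcomes for applicants in groups A and B
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups)
# Group A approves at 0.75, group B at 0.25: a gap of 0.5 would
# trigger the human-oversight step the report's authors call for.
```

A check like this is only one slice of the report’s list; governance, transparency, and human oversight still require process, not just code.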
Also: The best AI chatbots: ChatGPT and other noteworthy alternatives
About two-thirds (65%) of the executives surveyed said they already have or plan to have AI-powered services in place during the next 12 months. Leading use cases for AI include powering engagement functions, such as customer service and marketing (cited by 39%), and producing analytic insights (35%).
The survey found the top concerns about AI outputs are the accuracy of current-day data models, false information, and lack of transparency. More than three-quarters (77%) of executives expressed concern about AI generating false information.
AI providers aren’t supplying enough information to help formulate guidelines, the business leaders said — especially when it comes to data security, transparency, and the creation of strong ethical policies.
Also: Today’s AI boom will amplify social problems if we don’t act now
More than a third (36%) of respondents said their businesses have rules about using generative AI tools, such as ChatGPT. But 20% said their companies are giving individual employees free rein regarding the use of AI tools for the foreseeable future.
The Conversica survey shows there is a leadership gap when it comes to making responsible AI a reality. So, how can technology leaders and line-of-business professionals step up to ensure responsible AI practices are in place?
The business might want to implement AI quickly, but caution must be taken to ensure the tools and their models are accurate and fair. While businesses are looking for AI to advance, the technology must deliver responsible results every time.
Source: https://www.zdnet.com/article/everyone-wants-responsible-ai-but-few-people-are-doing-anything-about-it/#ftag=RSSbaffb68