The Risks of AI: Organizational Concerns About Data Accuracy and Cybersecurity Guidance

Organizations in Australia are increasingly interested in artificial intelligence (AI), but they are also growing more aware of the technology’s weaknesses, particularly around data accuracy. According to an AI trends report released by Google, fewer than half of the surveyed business and IT leaders (44 percent) expressed full confidence in their organization’s data quality, and a further 11 percent reported even lower confidence. The doubt extends to governance: only 54 percent of respondents considered their organizations somewhat mature in data governance, and just 27 percent rated them extremely or very mature in this area.

The report also found that more than two-thirds of employees (69 percent) admitted to bypassing their organization’s cybersecurity guidance within the past year. That finding is troubling given the surging interest in AI: search interest in the technology reached a record high in May, rising 20 percent in the April-June period compared with the first quarter of the year.

The report stressed that it is not enough simply to apply large language models (LLMs) to data. These models, which power AI chatbots, must be grounded in good-quality enterprise data, or they risk presenting incorrect or misleading information as fact. Such misleading outputs, known as AI hallucinations, pose significant risks. American AI expert Susan Aaronson is skeptical of AI’s benefits, arguing that the datasets AI produces are generally inaccurate and describing the technology as being “so full of hallucinations.” According to Aaronson, no federal law in the US currently prevents the misuse of AI, and she believes people will exploit its vulnerabilities.
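The grounding the report describes is commonly implemented as retrieval-augmented generation: the model is handed vetted enterprise passages and instructed to answer only from them, citing its sources. The sketch below is illustrative rather than anything from the report; the document store, the `retrieve` function, and the prompt wording are all assumptions, and a production system would use embeddings, a vector database, and a real LLM API instead of the naive keyword matching shown here.

```python
# Minimal sketch of grounding an LLM answer in enterprise data (an assumed
# setup, not the method from the report). A real system would use embedding
# search and an LLM API; this stub shows only the grounding pattern.

# Stand-in for a curated enterprise knowledge base.
documents = {
    "policy-107": "Refunds are processed within 14 business days of approval.",
    "policy-212": "Customer data is retained for 7 years under AU regulations.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), doc_id, text)
        for doc_id, text in documents.items()
    ]
    scored.sort(reverse=True)
    # Keep only passages with at least one overlapping term.
    return [(doc_id, text) for score, doc_id, text in scored[:k] if score > 0]

def build_grounded_prompt(query: str) -> str:
    """Embed retrieved passages in the prompt and require cited answers,
    so the model answers from supplied facts rather than inventing them."""
    passages = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer using ONLY the sources below. Cite the source id for each "
        "claim. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("How long do refunds take to be processed?"))
```

The key design choice is the instruction to refuse when the sources are silent: hallucinations arise when a model fills gaps from its training data, and constraining it to cited enterprise passages is the mitigation the report points toward.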

To illustrate her point, Aaronson cited the childcare benefits scandal in the Netherlands, where the tax authority used a self-learning algorithm to flag suspected benefits fraud and ended up wrongly accusing roughly 26,000 parents, most of them from lower-income, immigrant, or ethnic-minority backgrounds. The false accusations caused severe financial hardship and were linked to suicides among affected families, and the scandal ultimately led to the resignation of the Dutch government in 2021. Australia’s Senate inquiry into AI has echoed these concerns, with calls for guidelines and limits on the technology’s use.

These concerns are shared by Adobe Asia Pacific public sector strategy director John Mackenney, who points to the misappropriation of image, likeness, voice, and artistic style as one of the biggest issues raised by customers and the creative community. In response, Australia’s national AI expert advisory group is weighing mandatory regulations for high-risk AI deployments, and the Senate inquiry’s findings are expected in September.

In conclusion, interest in AI is growing, but so is concern about the technology’s weaknesses and vulnerabilities. Data accuracy remains a major challenge, and confidence in organizational data quality and governance is low. As the Dutch childcare benefits scandal shows, incorrect or biased outputs can do real harm, which makes regulations and guidelines for the responsible and ethical use of AI essential to protect individuals and communities.
