26 Oct 2023 | 22:41 UTC

POWER OF AI: Oil sector faces dual realities of AI's opportunities, drawbacks

Highlights

Lawmakers rally for strategic AI development, protections

Stakeholders uncover policy blind spots

'Hallucinations' present hurdle to AI use by oil sector


This feature is part of a longer series that addresses AI in several important areas of the energy industry.

Also read: POWER OF AI: Oil and gas sector drills down into methane emissions data

The pervasive advancement of artificial intelligence (AI) is sparking both enthusiasm and apprehension, as the technology's potential to surpass human capabilities and infiltrate physical environments becomes increasingly tangible.

The integration of machine learning and AI in the oil sector has been a longstanding practice, but it wasn't until the introduction of cloud computing that these solutions truly gained momentum.

Oil companies are already harnessing AI to optimize production, aid with supply chain logistics, mitigate environmental impact, and expedite research and development efforts, including biofuel development.

"Machine learning allows the oil and gas industry to remove a level of human error, whether through detecting anomalies in refinery assets or by increasing the knowledge retained by energy facility managers and allowing them to make real-time decisions based on AI-gathered data," said Molly Determan, president of the Energy Workforce & Technology Council, which helps energy companies and energy workers prepare for a low-carbon future.

Lack of oversight

The oil industry has grappled with the persistent challenge of flawed data skewing AI outputs, lending weight to the computing adage "garbage in, garbage out."
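
The principle is easy to demonstrate: readings corrupted by a faulty sensor will corrupt whatever model consumes them, so screening data before it reaches a model is a routine first defense. A minimal sketch, assuming pandas and entirely made-up plausibility limits:

    # Minimal sketch: screening out implausible sensor values before they
    # reach a model. The physical limits here are made up for illustration.
    import pandas as pd

    raw = pd.DataFrame({
        "temp_c": [351.2, 349.8, -9999.0, 352.1],   # -9999 is a stuck sensor
        "pressure_bar": [40.1, 39.7, 40.3, 0.0],    # 0.0 is a dropout
    })

    LIMITS = {"temp_c": (0.0, 600.0), "pressure_bar": (1.0, 100.0)}

    mask = pd.Series(True, index=raw.index)
    for col, (lo, hi) in LIMITS.items():
        mask &= raw[col].between(lo, hi)

    clean = raw[mask]
    print(f"kept {len(clean)} of {len(raw)} rows")  # garbage rows never reach training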

Policymakers are increasingly concerned about AI's impact on national security and global competitiveness, as well as how to harness its benefits without courting disaster.

Gabriel Rene, CEO of cognitive computing company VERSES and co-author of a recent report on the future of global AI governance, said AI is poised to become smarter than humans, network and form systems, then expand beyond the screen into the physical world.

And despite the dysfunction on Capitol Hill, there does appear to be bipartisan consensus that some rules of the road for AI are needed.

Senators Joe Manchin, Democrat-West Virginia, and John Barrasso, Republican-Wyoming, have called for comprehensive strategies to manage the escalating influence of AI, both domestically and internationally.

Manchin expressed apprehension that AI technologies could be used to erode existing defenses, leaving nations vulnerable to sophisticated weaponization, cyberattacks and disease outbreaks.

He underscored the imperative for the Department of Energy to implement strategic planning, leveraging national labs and fostering partnerships with the private sector to enhance AI development while preserving national security interests.

Barrasso cautioned against potential political biases that could infiltrate the AI development process, urging stringent protection measures, especially when taxpayer funds are involved.

Senators Richard Blumenthal, Democrat-Connecticut, and Josh Hawley, Republican-Missouri, have put forth a legislative framework to establish guardrails for AI. The framework includes a licensing regime for companies involved in high-risk AI development, legal accountability for harms, and the creation of an independent oversight body to enforce AI regulations. It aims to strike a balance between fostering innovation and providing essential safeguards against the risks and abuses AI technologies can pose.

Blind spots

Technology experts, however, have warned of policy blind spots that existing and developing regulatory frameworks have failed to address.

Dentons managing partner Peter Stockburger, another co-author of the global AI governance report, said policymakers must address the interconnectedness of AI systems, noting a critical gap in the current regulatory focus on AI deployers and organizations.

Stockburger highlighted varying approaches such as the market-based strategy in the UK and the risk-based approach adopted by the EU and the US, each striving to address specific use cases but potentially overlooking the broader network implications.

Stockburger advocated for a tiered regulatory approach, acknowledging the immediate need for interim measures while building standards and protocols for long-term governance. Drawing parallels to the early days of electricity, he emphasized the importance of establishing foundational standards as a precursor to more sophisticated regulatory frameworks.

Woodrow Hartzog, a Boston University School of Law professor, highlighted the limitations of existing industry-led approaches in AI governance.

He pointed out the inadequacy of partial measures such as encouraging transparency, mitigating bias and promoting ethical principles in ensuring full protection and accountability in the AI domain.

"Half measures like audits, assessments and certifications are necessary for data governance, but industry leverages procedural checks like these to dilute our laws into managerial, box-checking exercises," he said. "A checklist is no match for the staggering fortune available to those who exploit our data, our labor, and our precarity to develop and deploy AI systems. And it's no substitute from meaningful liability when AI systems harm the public."

Hartzog called for a more comprehensive regulatory approach, urging lawmakers to proactively address AI design, ensure substantive laws to limit power abuses, and resist the notion that AI development is inevitable without considering its potential societal impacts.

Also read: POWER OF AI: While automation steams ahead, commodity traders eye AI cautiously (subscriber content)

Operational hurdles

The oil industry is also running into operational hurdles as it looks to AI.

A major limitation of generative AI is its tendency to produce misleading information, or "hallucinations," compounded by its inability to recognize gaps in its own knowledge. That blind spot poses serious risks in safety-critical sectors such as oil and gas.
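
No setting simply switches hallucinations off; one common mitigation is to accept a model's output only when it can be checked against a trusted source. In the minimal sketch below, ask_llm() is a placeholder for any chat-model call, and the reference table and tolerance are invented for illustration:

    # Minimal sketch: accept a generated answer only if it matches a trusted
    # reference. ask_llm() is a placeholder for any chat-model call; the
    # reference values are invented for illustration.
    TRUSTED_SPECS = {"pipeline_max_pressure_bar": 95.0}

    def ask_llm(question: str) -> str:
        # Placeholder: in practice this would call a hosted or local model.
        return "96.0"

    def verified_answer(question: str, key: str, tolerance: float = 0.01) -> str:
        answer = ask_llm(question)
        try:
            value = float(answer)
        except ValueError:
            return "REJECTED: non-numeric answer"
        expected = TRUSTED_SPECS[key]
        if abs(value - expected) / expected > tolerance:
            return f"REJECTED: {value} disagrees with trusted value {expected}"
        return answer

    print(verified_answer("What is the pipeline's max pressure in bar?",
                          "pipeline_max_pressure_bar"))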

Despite the technology's undeniable potential, industry experts stressed the importance of exercising caution, particularly in contexts where high-stakes decisions are involved.

Paula Doyle, chief digital officer at Aker BP, Europe's largest independent oil and gas producer, emphasized the importance of identifying genuine value propositions over succumbing to industry hype.

She also stressed the need to maintain transparency and control over AI systems, especially in cybersecurity and in decision-making processes that directly affect safety and the environment.

"One of the risks that we're seeing now is around the use of LLMs," she said.

LLMs, or large language models, are a form of generative AI trained on vast amounts of text and other data, enabling them not only to interpret existing content but to generate original content. ChatGPT is a prominent example.

"We need to know what's happening in those black boxes," Doyle said of LLM systems. "What the algorithms are doing, what the inputs are, what's the technology that's within them because the decisions that we make could have major impacts on people's safety or on the environment and of course on the supply of oil and gas, too. So, it's not something that we can just trust to a black box."

