AI: The Good, the Bad, the Sketchy and the Lifesaving
With its already vast applications for increasing sustainability, artificial
intelligence was naturally a hot
topic throughout the week at SB’24 San
Diego. But
discussions also addressed its darker side — and, along with diving into its
role in creating healthier products and helping save one of our most vital
natural resources, a variety of experts explored how to rein in the technology’s
environmental impacts and assuage the fears of hesitant consumers.
A faster, more efficient path to safer chemicals
Image credit: cottonbro studio
As regulatory pressures mount around eliminating
PFAS
and other harmful substances, and consumers increasingly prioritize
sustainability, companies are having to pay closer attention to the chemicals
they
use,
and balance meeting regulatory requirements with maintaining product quality and
performance. However, reformulating products to meet these demands presents
significant challenges — particularly when it comes to identifying at-risk
ingredients and finding suitable replacements.
Identifying which ingredients pose risks to human health and the environment can
be a complex task, especially given the vast number of chemicals used in
industry. Manufacturers also face a race against time to stay ahead of evolving
regulations, as reformulating products typically takes one to three years.
Adding to the complexity is the fact that a replacement can prove just as risky as the original and may itself be regulated down the line, forcing manufacturers to repeat the entire process.
In response to these challenges, NobleAI — which
demonstrated its product offering in the Innovation Expo at SB’24 San Diego — offers a
risk-assessment and ingredient-replacement platform that helps companies quickly
and efficiently navigate the complexities of product reformulation. The service
is designed to identify and prioritize risks, find suitable ingredient
replacements, and optimize formulations to reduce the time to market while
ensuring compliance with regulatory demands.
NobleAI’s customizable risk-assessment tool leverages pre-trained models to
evaluate factors such as acute toxicity and carcinogenic potential, and uses
curated toxicological and environmental data from sources including the US
EPA and the National Cancer Institute for accurate risk assessments. This
approach allows manufacturers to make science-based
predictions about potential
risks and avoid “regrettable substitutions” — in which a replacement ingredient
later proves to be equally harmful.
The process begins when manufacturers plug product names, chemical recipes and
CAS numbers into NobleAI’s
risk-assessment tool — which breaks down formulations into graphs and charts
that classify ingredients by their risk levels. The system allows users to click
on individual ingredients to explore suitable alternatives, offering a detailed
hazard map for the entire product portfolio.
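To make that workflow concrete, here is a minimal, purely illustrative Python sketch of the classification step; the ingredient fields, hazard scores and risk thresholds below are assumptions for illustration, not NobleAI's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Ingredient:
    name: str
    cas_number: str      # CAS registry number, e.g. "1763-23-1" for PFOS
    hazard_score: float  # 0-1, assumed output of a pre-trained toxicity model

def classify_formulation(ingredients: list[Ingredient]) -> dict[str, list[Ingredient]]:
    """Bucket a formulation's ingredients into illustrative risk levels."""
    buckets: dict[str, list[Ingredient]] = {"high": [], "medium": [], "low": []}
    for ing in ingredients:
        if ing.hazard_score >= 0.7:      # thresholds are made up for the sketch
            buckets["high"].append(ing)
        elif ing.hazard_score >= 0.3:
            buckets["medium"].append(ing)
        else:
            buckets["low"].append(ing)
    return buckets

formulation = [
    Ingredient("PFOS", "1763-23-1", 0.92),    # a well-known PFAS
    Ingredient("glycerin", "56-81-5", 0.05),  # low-hazard humectant
]
print(classify_formulation(formulation))
```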
Once risks are identified, NobleAI’s ingredient-replacement service steps in to
find lower-risk substitutes. The AI-driven system optimizes for specific
properties, ensuring that new formulations maintain the desired product
performance while reducing health and environmental risks. Users are provided
with a formulation comparison highlighting changes from the old to the new
product, making the process transparent and efficient.
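In spirit, that before/after comparison might look something like the following sketch; the weight-percent fields, diff format and substitute name are assumptions, not NobleAI's actual output.

```python
def compare_formulations(old: dict[str, float], new: dict[str, float]) -> None:
    """Print how a reformulation differs, ingredient by ingredient (name -> weight %)."""
    for name in sorted(old.keys() | new.keys()):
        if name not in new:
            print(f"- removed: {name} ({old[name]}%)")
        elif name not in old:
            print(f"+ added:   {name} ({new[name]}%)")
        elif old[name] != new[name]:
            print(f"~ changed: {name} ({old[name]}% -> {new[name]}%)")

# Hypothetical reformulation swapping out a PFAS for a lower-risk substitute
compare_formulations(
    {"PFOS": 2.0, "glycerin": 10.0},
    {"glycerin": 10.0, "substitute_surfactant": 2.0},  # substitute name is made up
)
```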
With AI’s help, manufacturers can anticipate regulatory changes, accelerate their time to market and turn sustainability into a competitive advantage. By optimizing products for both performance and environmental safety, manufacturers can meet their sustainability goals while maintaining the success of their existing product lines, positioning them to stay ahead in a rapidly changing market.
Could AI’s environmental drawbacks outweigh its benefits?
Image credit: OECD
The benefits of artificial intelligence (AI) for business are well documented:
It can automate repetitive tasks such as data entry, customer service and
inventory management — freeing up employees to focus on higher-value work. It
can process vast amounts of data in real time — enabling companies to make more
informed decisions, optimize supply chains or predict market trends. It can be
used to personalize customer experiences, analyzing behavior and preferences to
serve up tailored recommendations.
It can drive innovation, too — helping businesses develop new products, optimize
existing processes and enter new markets.
What is less known, talked about or quantified is AI’s impact on the
planet.
As John Frey, Chief Technologist with
Hewlett Packard Enterprise (HPE),
pointed out during a Tuesday afternoon panel: “Sustainability professionals have
little experience with enterprise technology,” and that needs to change if the
negative impacts of tech are to be addressed.
Frey gave a potted history of AI — a technology that has been around for more
than 60 years and is today found in everything from detecting credit card fraud
to helping clinicians make more robust diagnoses. Generative AI, the most widely used form of the technology today, relies on large language models to serve up graphic or text-based information. But all this output comes at a cost, both in the energy used to generate it and the water used to cool the data centers doing the work.
Frey’s slideshow made for alarming reading. According to the latest Gartner
Hype Cycle, by 2026, 75 percent
of businesses will use generative AI to create synthetic data; in 2023, that
figure was less than 5 percent. AI’s share of global data-center use is set to jump to 7.3 percent next year, from 2.3 percent this year. Worryingly, the power required to train AI models is doubling every 3-4 months. And the net new peak load from AI is expected to reach 2,000 MW by 2025, with each new query typed into engines such as ChatGPT using 0.5 Wh of power.
Then, there’s the water impact. Water is used in cooling, in power generation and at the chip-fabrication stage. Each question posed to a generative-AI engine uses 16 oz of water, and the whole process is driving double-digit growth in water use for model developers and hosts.
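Those per-query figures scale up quickly. Here is a rough back-of-envelope check using the numbers cited in the session; the daily query volume is an assumption for illustration, not a figure from the talk.

```python
# Per-query figures cited in the session
WH_PER_QUERY = 0.5   # watt-hours of electricity per query
OZ_PER_QUERY = 16    # fluid ounces of cooling water per query

QUERIES_PER_DAY = 1_000_000_000  # assumed volume, purely illustrative

energy_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000_000                    # Wh -> MWh
water_megaliters = OZ_PER_QUERY * QUERIES_PER_DAY * 0.0295735 / 1_000_000  # fl oz -> ML

print(f"{energy_mwh:,.0f} MWh and {water_megaliters:,.0f} megaliters per day")
# At a billion queries a day: about 500 MWh and roughly 470 megaliters of water
```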
Companies including HPE have been working for years to help brands tackle these environmental impacts head-on — encouraging the adoption of a
holistic approach that examines not just data and software efficiency but also
equipment, energy and resource efficiency.
“Your company stores loads of data,” Frey said. “Only a third of it is ever used
and most of it is not valuable after about a week. One option is to put
constraints on the amount of power and equipment needed for your AI needs. Be
the change.”
Kyle Ward, Interim Director for
Decarbonization and Energy Transition at Anthesis
Group, agreed and offered an alternative — the 4M framework — for brands looking to address their AI impacts.
“Use AI only when it is needed and select an efficient AI model architecture and
process,” he said. “Also, be sure to incorporate sustainability standards into
the development of digital services and products.”
Both speakers admitted that AI is an evolving topic, and not one that is easy to
address. Frey urged brands to keep chipping away at their impacts, even after the obvious reductions have been made. “Efficiency is the first fuel,” he said,
pointing to emerging options such as reusing or selling the heat generated by
data centers.
For all its good, AI has an emissions problem that — without action — could
spiral out of control. The key takeaway from this session: It’s time to take it
seriously.
Clear communication, transparency can help alleviate consumer distrust of AI
Image credit: Amazon
Consumer hesitation and distrust toward AI and other emerging technologies are
significant challenges in today’s tech-driven world. The root of these concerns
often lies in the complex, confusing language used to describe innovations and a
lack of transparency around their ethical implications.
A Wednesday afternoon panel discussed how brands can overcome these barriers by
rethinking how they communicate about the technology with the public — using
more accessible and human-centered language while addressing concerns about
privacy, ethics and societal impacts.
Chris Konya — Co-Chief Strategy
Officer at strategy and design consultancy SYLVAIN —
offered insight into the power of language in shaping perceptions, using
Amazon‘s Alexa as an example. The name “Alexa” was
chosen for its approachable and unique sound. However, the device’s ubiquity as a household name also led to the near extinction of "Alexa" as a baby name. And
the now-common theme of AI assistants with female names such as Alexa, Siri
and Cortana raises concerns about perpetuating gender biases in
technology.
This highlights how language can empower or alienate consumers, and make or break a technology.
“Behind every word, there is a small bit of narrative — a small idea in our
brains that comes and flows through society,” Konya said. “Those ideas shape the
way that we think, the beliefs we have, and shape the things we do.”
Complex technical terms such as "Generative Pre-trained Transformer"
(GPT) risk alienating everyday users — creating a sense of exclusion and
fostering mistrust. As the Sapir-Whorf hypothesis holds, language shapes thought; and without clear, inclusive language, people may struggle to fully understand or embrace AI. This breeds fear, as the technology feels unrelatable, intimidating and outside their control.
To address this, companies must simplify their language and make AI more
accessible. Misha Kouzeh, founder of
social impact consulting firm Tech Makes
History, noted that 75 percent of consumers
globally are concerned about the risks of AI.
“Sustainability and AI are both complex topics,” she said. “We need to think
about how we distill these topics and talk to our consumers about what AI can do
to drive humanity forward.”
Ethical concerns, particularly around privacy and data usage, contribute to the
distrust. To counteract this, transparent, straightforward communication that avoids jargon and buzzwords is crucial. For
instance, IBM‘s rebranding of AI as "augmented
intelligence" helped soften the
perception of AI as something overwhelming, making it feel more collaborative
and less threatening.
This approach is critical in consumer-facing fields, such as self-driving
vehicles or AI-driven financial tools. By emphasizing how these technologies
augment human abilities rather than replace them, companies can help users
feel more in control. Transparency about data usage and ethical standards is
also key to fostering trust. Apple, for example, has built consumer
confidence by prioritizing privacy in its AI-driven products.
Andrew McKechnie, Head of
Marketing at carbon-utilization innovator Air
Company, offered another perspective on this
challenge. By using the power of design and experiences, as well as positioning
its
innovations
within a broader narrative of sustainability, Air Company engages consumers and
potential customers who might otherwise be hesitant to embrace climate tech.
“What we try to do is communicate that the technology exists in different life
stages to our consumers and our B2B and enterprise customers,” he explained. “We
want them to understand that there is tech available today that can be a
potential path to
decarbonization.”
Ultimately, alleviating consumer hesitation and distrust of AI and other
emerging technologies requires a shift in how companies communicate. By focusing
on human-centered language, transparency and ethical clarity, brands can build
trust and empower consumers to engage with the benefits of these innovations.
Storytelling, rather than technical jargon, should be at the heart of this
communication strategy — bridging the gap between complex technology and
everyday user experience. Through thoughtful, inclusive communication, tech
companies can drive greater acceptance and trust in the technologies shaping the
future.
AI, the Colorado River and a water-positive movement
The Colorado River flows through the Grand Canyon | Image credit: David Ilecio
Global water scarcity is no longer just an environmental issue — it’s a pressing
social problem and a business continuity
challenge.
How do companies with ambitious growth strategies secure the water they need to
continue growing in a world where paying more for water will not work? At a
certain point, having all the money in the world won’t matter.
This was the message given loud and clear by FIDO Tech
founder and CEO Victoria
Edwards during a Wednesday
afternoon session about how her company is working to safeguard one of our most
precious resources.
The numbers backing up her message were alarming, to say the least. By 2030, the
World Bank
projects water
demand will outstrip supply by 40 percent. Since 2010, there has been a 500
percent increase in conflicts over restricted access to water.
Exacerbating the problem is leakage. Up to 60 percent of all water is lost
through leaky pipes. In Chile, 70 percent of water is being lost; and of the
remaining 30 percent available for communities to use, 65 percent is being used
to mine lithium.
Edwards and her team have come up with a solution: an AI-assisted leak-detection technology that ‘listens’ for leaks
and can pinpoint exactly where utilities need to dig to access and repair the
leaky pipe.
“Leaks make a distinctive noise in the pipes, like an F sharp. But they never
show aboveground,” she said. “AI can tell where there’s a leak, how big it is
and whether it makes commercial sense to dig and repair it.
“Yes, AI — particularly, generative AI — has an environmental impact; but it is
part of the solution, too.”
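FIDO's actual acoustic models are proprietary, but the core idea Edwards describes (flagging a distinctive narrowband tone buried in pipe noise) can be illustrated with a toy frequency-domain check; the sampling rate, target frequency and tolerance below are all assumptions.

```python
import numpy as np

SAMPLE_RATE = 8000   # Hz; assumed sensor sampling rate
LEAK_FREQ = 370.0    # Hz; roughly an F sharp, per the quote above
TOLERANCE = 15.0     # Hz; window around the target tone

def looks_like_leak(signal: np.ndarray) -> bool:
    """Flag a recording whose dominant frequency sits near the leak tone."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return abs(peak - LEAK_FREQ) <= TOLERANCE

# Synthetic one-second recording: a leak tone buried in broadband pipe noise
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
recording = np.sin(2 * np.pi * LEAK_FREQ * t) + 0.5 * np.random.randn(SAMPLE_RATE)
print(looks_like_leak(recording))  # True in almost all runs
```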
Recognizing that most water utilities are under-resourced, and often faced with
rigid operational practices and a workforce resistant to change, Edwards needed
to find an alternative model to fund the much-needed leakage detection and
repair. And as she explained, she got lucky: She found
Microsoft, a company that had pledged to become water positive by 2030 and needed a transparent, auditable way of making a difference.
The two organizations decided to kickstart their partnership in London,
working with Thames
Water
and creating a “catalytic community” actively looking for leaks across the
enormous water network.
“We helped Thames save 4,730,348,247 gallons of water — fully audited by
blockchain technology. Fully transparent,” Edwards stated.
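The details of that audit trail are not public, but the appeal of a blockchain-style ledger is easy to sketch: each savings record commits to the hash of the previous one, so history cannot be quietly edited. The record fields below are made up for illustration.

```python
import hashlib, json

def add_record(chain: list[dict], gallons_saved: float, site: str) -> None:
    """Append a water-savings record that hash-links to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"site": site, "gallons_saved": gallons_saved, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

ledger: list[dict] = []
add_record(ledger, 120_000.0, "london-district-7")   # hypothetical entries
add_record(ledger, 85_500.0, "london-district-12")
print(ledger[-1]["hash"][:16], "links back to", ledger[-1]["prev"][:16])
```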
Now, the team has taken its approach to the Colorado River basin — which
supplies seven US states, supports 40 million people and a $1.4tn economy, and
is one of the world’s most over-allocated rivers.
“After London, we knew we wanted to go further, bigger and faster — to expand
and take on a big problem: the Colorado River basin. And we invited others to join us — tech and other corporations, including Microsoft, PepsiCo and Meta, and utilities — to build something that has a lasting impact.”
That ‘something’ — known as Water
United
— goes beyond using AI to detect leaks: It is also being used to inform decision-making by city planners, utilities and local communities. By understanding the impact of river-water extraction, partners can decide whether to build a new data center, for example.
“It’s about using the theory of connections and giving that to the local
community to help them plan a water-resilient future,” Edwards explained. “All
parties get what they need from the relationship. Businesses get continuity,
tech is funded for the long term, utilities get a more resilient network.”
In closing, Edwards highlighted that in the one hour she had been talking,
1,875,000,000 cubic meters of water had been lost unnecessarily: “It’s
unconscionable.”