The Black Box Problem: Why AI Needs Proof, Not Promises – Decrypt

By Ismael Hishon-Rezaizadeh
May 18, 2025
in Web 3

About the author

Ismael Hishon-Rezaizadeh is the founder and CEO of Lagrange Labs, a zero-knowledge infrastructure company building verifiable computation tools for blockchain and AI systems. A former DeFi engineer and venture investor, he has led projects across cryptography, data infrastructure, and machine learning. Ismael holds a degree from McGill University and is based in Miami.

The views expressed here are his own and do not necessarily represent those of Decrypt.

When people think about artificial intelligence, they think about chatbots and large language models. Yet it’s easy to overlook that AI is becoming increasingly integrated into critical sectors of society.

These systems don’t just recommend what to watch or buy anymore; they also diagnose illness, approve loans, detect fraud, and target threats.

As AI becomes more embedded into our everyday lives, we need to ensure it acts in our best interest. We need to make sure its outputs are provable.

Most AI systems operate in a black box, where we often have no way of knowing how they arrive at a decision or whether they’re acting as intended. 

This lack of transparency is baked into how they work, and it makes it nearly impossible to audit or question an AI’s decisions after the fact.

For certain applications, this is good enough. But in high-stakes sectors like healthcare, finance, and law enforcement, this opacity poses serious risks. 

AI models may unknowingly encode bias, manipulate outcomes, or behave in ways that conflict with legal or ethical norms. Without a verifiable trail, users are left guessing whether a decision was fair, valid, or even safe.

These concerns become existential when coupled with the fact that AI capabilities continue to grow exponentially. 

There is a broad consensus in the field that developing an Artificial Superintelligence (ASI) is inevitable.

Sooner or later, we will have an AI that surpasses human intelligence across all domains, from scientific reasoning and strategic planning to creativity and even emotional intelligence.

Questioning rapid advances 

LLMs are already showing rapid gains in generalization and task autonomy. 

If a superintelligent system acts in ways humans can’t predict or understand, how do we ensure it aligns with our values? What happens if it interprets a command differently or pursues a goal with unintended consequences? What happens if it goes rogue?

Scenarios where such a thing could threaten humanity are apparent even to AI advocates. 

Geoffrey Hinton, a pioneer of deep learning, warns of AI systems capable of civilization-level cyberattacks or mass manipulation. Biosecurity experts fear AI-augmented labs could develop pathogens beyond human control. 

And Anduril founder Palmer Luckey has claimed that the company’s Lattice AI system can jam, hack, or spoof military targets in seconds, making autonomous warfare an imminent reality.

With so many possible scenarios, how will we ensure that an ASI doesn’t wipe us all out?

The imperative for transparent AI

The short answer to all of these questions is verifiability. 

Relying on promises from opaque models is no longer acceptable for their integration into critical infrastructure, much less at the scale of ASI. We need guarantees. We need proof.

There’s a growing consensus in policy and research communities that technical transparency measures are needed for AI. 

Regulatory discussions often mention audit trails for AI decisions. For example, NIST guidance in the US and the EU AI Act have both highlighted the importance of AI systems being “traceable” and “understandable.”

Luckily, AI research and development doesn’t happen in a vacuum. There have been important breakthroughs in other fields like advanced cryptography that can be applied to AI and make sure we keep today’s systems—and eventually an ASI system—in check and aligned with human interests.

The most relevant of these right now are zero-knowledge proofs (ZKPs), which offer a novel way to achieve traceability and are immediately applicable to AI systems.

In fact, ZKPs can embed this traceability into AI models from the ground up. More than just logging what an AI did, which could be tampered with, they can generate an immutable proof of what happened.

Using zkML libraries specifically, we can combine zero-knowledge proofs with machine learning to verify the computations these models perform.

In concrete terms, we can use zkML libraries to verify that an AI model was used correctly, that it ran the expected computations, and that its output followed specified logic—all without exposing internal model weights or sensitive data. 
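
To make that flow concrete, here is a minimal sketch of what a zkML pipeline looks like in practice. The `zkml` module and every function in it are placeholders invented for illustration, not a real library’s API; production toolchains such as ezkl expose a similar compile, setup, prove, and verify sequence under their own names. The model operator compiles a trained model into a circuit, proves a single inference, and publishes the proof alongside the output, so anyone holding the verification key can check the result without ever seeing the weights or the input data.

```python
# Hypothetical zkML flow. The `zkml` module and its functions are placeholders
# for illustration; real libraries follow the same compile -> setup -> prove ->
# verify shape under different names and signatures.
import zkml  # hypothetical package, not a real import

# 1. Compile the trained model into an arithmetic circuit that a
#    zero-knowledge proof system can reason about.
circuit = zkml.compile_model("credit_model.onnx")

# 2. One-time setup: the proving key stays with the model operator, while the
#    verification key is published for regulators, auditors, and users.
proving_key, verification_key = zkml.setup(circuit)

# 3. The operator runs one inference and produces a proof alongside it.
#    The applicant's data and the model weights stay private; only the
#    decision and the proof are shared.
decision, proof = zkml.prove(
    circuit,
    proving_key,
    private_inputs={"applicant_features": [0.42, 0.91, 0.13]},
)

# 4. Anyone holding the verification key can confirm that this exact circuit
#    (and therefore this exact model) produced this exact decision.
assert zkml.verify(verification_key, proof, public_outputs=decision)
```

The property that matters is the last step: verification needs only the proof, the published verification key, and the claimed output, so the check can be repeated by anyone, at any time, without re-running the model or trusting the operator’s logs.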

The black box

This effectively takes AI out of a black box and lets us know exactly where it stands and how it got there. More importantly, it keeps humans in the loop.

AI development needs to be open, decentralized, and verifiable, and zkML is what makes that possible.

This needs to happen today to maintain control over AI tomorrow. We need to make sure that human interests are protected from day one by being able to guarantee that AI is operating as we expect it to before it becomes autonomous.

However, zkML isn’t just about stopping a malicious ASI.

In the short term, it’s about ensuring that we can trust AI with the automation of sensitive processes like loans, diagnoses, and policing because we have proof that it operates transparently and equitably. 

zkML libraries can give us reasons to trust AI if they’re used at scale.
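
As a small follow-on sketch, reusing the hypothetical `zkml` interface from the example above, this is what the verifier’s side of “used at scale” could look like: an auditor who holds only the published verification key re-checks a batch of automated decisions without access to the model weights or to any individual’s raw data.

```python
# Verifier-side sketch, continuing the hypothetical `zkml` interface above.
# An auditor needs only the public verification key plus each decision and its proof.
import zkml  # hypothetical package, not a real import

def audit_decisions(verification_key, records):
    """Return the IDs of records whose proofs fail verification."""
    failures = []
    for record in records:
        valid = zkml.verify(
            verification_key,
            record["proof"],
            public_outputs=record["decision"],
        )
        if not valid:
            failures.append(record["id"])
    return failures

# Usage: a regulator re-checks a day's worth of automated loan decisions.
# suspicious_ids = audit_decisions(published_vk, yesterdays_decisions)
```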

As helpful as having more powerful models may be, the next step in AI development is to guarantee that they’re learning and evolving correctly. 

The widespread use of effective and scalable zkML will soon be a crucial component in the AI race and the eventual creation of an ASI.

The path to Artificial Superintelligence cannot be paved with guesswork. As AI systems become more capable and integrated into critical domains, proving what they do—and how they do it—will be essential. 

Verifiability must move from a research concept to a design principle. With tools like zkML, we now have a viable path to embed transparency, security, and accountability into the foundations of AI. 

The question is no longer whether we can prove what AI does, but whether we choose to.

Edited by Sebastian Sinclair

