Assert Digital Ventures

The Third Magic: AI

  • January 6, 2023
  • by Andy

I like the definition of science in this Noah Smith article: in essence, the ability to come up with a simple, generalizable prediction model. For example, to calculate where an artillery shell will fall, we go back to Newton’s laws of physics and it’s consistently predictable. However, for things messier than the natural world, like human language, there are no simple models, and so people argue that AI is the only way: take masses of data and form unintelligible models.
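
To make the contrast concrete, here is a minimal sketch (in Python, assuming no air resistance and flat ground) of the kind of simple, generalizable model the article has in mind: two lines of Newtonian mechanics predict the landing point for any launch speed and angle.

```python
import math

def landing_distance(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
    """Horizontal range of a projectile on flat ground, ignoring air resistance.

    Straight from Newton's laws: range = v^2 * sin(2 * theta) / g.
    """
    theta = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * theta) / g

# The same tiny model generalizes to any inputs -- no training data required.
print(round(landing_distance(300, 45)))  # ~9174 m
print(round(landing_distance(300, 30)))  # ~7945 m
```

Human language has no equivalent two-line formula, which is exactly the gap the article says black-box models fill.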

A big knock on AI is that because it doesn’t really let you understand the things you’re predicting, it’s unscientific. And in a formal sense, I think this is true. But instead of spending our effort on a neverending (and probably fruitless) quest to make AI fully interpretable, I think we should recognize that science is only one possible tool for predicting and controlling the world. Compared to science, black-box prediction has both strengths and weaknesses.

https://open.substack.com/pub/noahpinion/p/the-third-magic?r=q167&utm_campaign=post&utm_medium=email

The challenge, as the article points out, is what the implications are if we can’t understand the models. How good are our predictions really? Or, more importantly, how can we know when those predictions are off? ChatGPT is a good example of this: it sounds authoritative (and often is), but it’s not easy to know whether it’s accurate.

AI researchers use heartbeat detection to identify deepfake videos

  • September 5, 2020
  • by Andy

The arms race between creating and detecting deepfakes continues, but this is an interesting approach: using biological markers rather than digital ones to detect them.

Existing deepfake detection models focus on traditional media forensics methods, like tracking unnatural eyelid movements or distortions at the edge of the face. The first study for detection of unique GAN fingerprints was introduced in 2018. But photoplethysmography (PPG) translates visual cues such as how blood flow causes slight changes in skin color into a human heartbeat. Remote PPG applications are being explored in areas like health care, but PPG is also being used to identify deepfakes because generative models are not currently known to be able to mimic human blood movements.

https://venturebeat.com/2020/09/03/ai-researchers-use-heartbeat-detection-to-identify-deepfake-videos/
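
For intuition, here is a rough sketch of the remote-PPG idea: average the green channel over a patch of skin in each frame, then look for a dominant frequency in the plausible heart-rate band. This is a simplified illustration, not the researchers’ actual pipeline; the fixed face region and thresholds are assumptions.

```python
import numpy as np

def estimate_heart_rate(frames, fps, box):
    """Rough remote-PPG sketch: mean green intensity over a skin patch per frame,
    then the dominant frequency between 0.7 and 4 Hz (42-240 bpm).

    frames: iterable of HxWx3 RGB arrays; box: (top, bottom, left, right) over a
    cheek/forehead region (face tracking is omitted for brevity).
    """
    top, bottom, left, right = box
    signal = np.array([f[top:bottom, left:right, 1].mean() for f in frames])
    signal -= signal.mean()                               # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)                # plausible heart rates
    return 60.0 * freqs[band][np.argmax(spectrum[band])]  # beats per minute

# A detector along these lines could flag clips where no coherent pulse shows up,
# or where the "pulse" disagrees across different regions of the same face.
```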

Can data poisoning thwart face recognition systems?

  • August 4, 2020
  • by Andy

An application of data poisoning against ML face recognition systems. Interesting approach, but could it be practically scaled for seamless use by everyone?

Fawkes isn’t intended to keep a facial recognition system like Facebook’s from recognizing someone in a single photo. It’s trying to more broadly corrupt facial recognition systems, performing an algorithmic attack called data poisoning. The researchers said that, ideally, people would start cloaking all the images they uploaded. That would mean a company like Clearview that scrapes those photos wouldn’t be able to create a functioning database, because an unidentified photo of you from the real world wouldn’t match the template of you that Clearview would have built over time from your online photos.

https://www.nytimes.com/2020/08/03/technology/fawkes-tool-protects-photos-from-facial-recognition.html
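
For a sense of the mechanics, here is a toy sketch of the general cloaking idea: nudge pixels within a small budget so the image’s feature-space embedding drifts toward a different identity. The embedding model here is a hypothetical stand-in, and this shows the broad data-poisoning concept rather than Fawkes’ actual algorithm.

```python
import torch

def cloak_image(image, decoy_embedding, embed_model, eps=0.03, steps=50, lr=0.01):
    """Toy image "cloaking": add a small perturbation (bounded by eps) that pulls
    the image's face embedding toward a decoy identity's embedding, so scraped
    copies poison the template a recognizer builds for the real person.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        cloaked = torch.clamp(image + delta, 0.0, 1.0)
        embedding = embed_model(cloaked.unsqueeze(0)).squeeze(0)
        loss = torch.nn.functional.mse_loss(embedding, decoy_embedding)
        loss.backward()
        optimizer.step()
        delta.data.clamp_(-eps, eps)   # keep the change visually imperceptible
    return torch.clamp(image + delta.detach(), 0.0, 1.0)
```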

However, is it already too late to address this via technical means, leaving legislation as the only way to regulate it?

But Clearview’s chief executive, Hoan Ton-That, ran a version of my Facebook experiment on the Clearview app and said the technology did not interfere with his system. In fact, he said, his company could use images cloaked by Fawkes to improve its ability to make sense of altered images. “There are billions of unmodified photos on the internet, all on different domain names,” Mr. Ton-That said. “In practice, it’s almost certainly too late to perfect a technology like Fawkes and deploy it at scale.”

https://www.nytimes.com/2020/08/03/technology/fawkes-tool-protects-photos-from-facial-recognition.html

Watch Tesla’s neural net labeling complexity for stop signs+

  • April 26, 2020
  • by Andy

Insightful and interesting video from Andrej Karpathy on the challenges of tuning and labeling different situations for self-driving. The stop sign variables show how difficult a problem this is. Getting to 80% is probably not too difficult. Getting to 95% is very difficult. Getting the last 5%, and then the last 1%, is not only a massive challenge; you can also see how labeling could introduce false positives and negatives and actually hurt system performance. For many applications, 95% is good enough, but not here.
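
A back-of-the-envelope way to see why labeling bites at the high end: once the labels themselves are wrong a small fraction of the time, a genuinely better model stops looking better, because disagreements increasingly reflect labeler error rather than model error. A minimal simulation of a binary stop-sign detection task (the numbers are illustrative, not Tesla’s):

```python
import random

random.seed(0)

def apparent_accuracy(true_accuracy, label_error_rate, n=1_000_000):
    """Accuracy measured against noisy labels for a binary detection task:
    the model only "looks right" when it agrees with the (sometimes wrong) label."""
    agree = 0
    for _ in range(n):
        model_correct = random.random() < true_accuracy
        label_correct = random.random() > label_error_rate
        agree += model_correct == label_correct
    return agree / n

for true_acc in (0.80, 0.95, 0.99):
    print(true_acc, round(apparent_accuracy(true_acc, 0.02), 3))
# With 2% label noise, a 99%-accurate detector measures around 0.97:
# the hard-won last few points of real improvement vanish into labeling error.
```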

AI researchers propose ‘bias bounties’ to put ethics principles…

  • April 18, 2020
  • by Andy

Finding bias in machine learning models is a wholly different pursuit than finding bugs in rules-based code. While the incentives and “red teaming” proposals follow a familiar model, this should follow more along the lines of adversarial ML. However, since most models are not exposed publicly, owners would have to agree to expose them, which opens up additional hurdles.

Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. The kit for organizations creating AI models includes the idea of paying developers for finding bias in AI, akin to the bug bounties offered in security software. This recommendation and other ideas for ensuring AI is made with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bug bounty hunting community might be too small to create strong assurances, but developers could still unearth more bias than is revealed by measures in place today, the authors say.

https://venturebeat.com/2020/04/17/ai-researchers-propose-bias-bounties-to-put-ethics-principles-into-practice/
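
To make the bounty idea concrete, here is a minimal sketch of one metric a “bias bounty hunter” might compute and report against a publicly exposed model: the demographic parity gap, i.e. the spread in positive-outcome rates across groups. The metric choice and example data are my own illustration, not from the paper.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Spread between the highest and lowest positive-prediction rates across
    groups -- one simple, reproducible finding a bias bounty could report."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Synthetic example: model decisions for applicants from two groups.
preds  = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates, gap)   # {'a': 0.8, 'b': 0.2} 0.6
```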

Robots Welcome to Take Over, as Pandemic Accelerates Automation

  • April 11, 2020
  • by Andy

As I often discuss, it’s cultural changes in society’s acceptance and changes in needs that drive new tech adoption, not the technology itself.

“Pre-pandemic, people might have thought we were automating too much,” said Richard Pak, a professor at Clemson University who researches the psychological factors around automation. “This event is going to push people to think what more should be automated.”

https://www.nytimes.com/2020/04/10/business/coronavirus-workplace-automation.html

An Algorithm That Grants Freedom, or Takes It Away

  • February 9, 2020
  • by Andy

This NYT article on how algorithms are being used for various predictions in the justice system is insightful but hardly surprising.

In Philadelphia, an algorithm created by a professor at the University of Pennsylvania has helped dictate the experience of probationers for at least five years. The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.

https://www.nytimes.com/2020/02/06/technology/predictive-algorithms-crime.html

The problem here isn’t that algorithms are being used in these ways. In fact, it’s likely better to remove the frailties and individual biases often at play in these life-changing decisions. However, there are two interrelated issues here:

  1. Lack of transparency breeds conspiracy. It’s easy to claim that a heartless algorithm is wrong to make these major decisions, but the bigger issue is that we don’t know how, by whom, and with what training data these algorithms were developed.
  2. Biases will be built in. We like to think algorithms don’t have biases, but they were trained on human data, and humans have biases. There are systematic ways to work those biases out of machine learning algorithms (one example is sketched below), but that’s a non-trivial effort. Per #1, unless we have transparency, we cannot make these systems better.

Governments should open these up (with proper privacy protections) to researchers who can discover and fix the biases. Only then will we trust them.
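
One example of the “systematic ways” mentioned in point 2: reweighing the training data so that, before fitting, group membership is statistically independent of the outcome (a standard pre-processing technique in the fairness literature). A minimal sketch; the variable names and data are illustrative:

```python
from collections import Counter

def reweighing_weights(labels, groups):
    """Per-example weights that make group membership independent of the label:
    weight(g, y) = P(group = g) * P(label = y) / P(group = g and label = y).
    """
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] * label_counts[y]) / (n * joint_counts[(g, y)])
        for g, y in zip(groups, labels)
    ]

# These weights can be passed to most training APIs (e.g. a sample_weight
# argument) so the refit model no longer learns the group/outcome correlation.
labels = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(reweighing_weights(labels, groups))
```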

The Secretive Company That Might End Privacy as We…

  • January 18, 2020 (updated January 23, 2020)
  • by Andy

Exposé on Clearview AI, which has built a ~3bn-image (and growing) database by scraping public sources like social media, without your knowledge or permission. It’s currently used, as far as we’re told, solely for law enforcement. However, this puts pressure on the privacy concerns that have been gaining voice as computer vision neural nets go mainstream and push past 95% accuracy.

While this is new ground technically, it isn’t new in the sense that a technique which can greatly help society (solve crimes) can also be misused (including by those entrusted to use it properly). Just one example from some smart NYT reporting:

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

We don’t have privacy now, and it’s regularly being eroded by cameras, online data, and proprietary behavioral databases (e.g., Equifax).

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

Banning doesn’t work, and it won’t here, especially since technology easily spans borders where legal standards vary.

“It’s creepy what they’re doing, but there will be many more of these companies. There is no monopoly on math,” said Al Gidari, a privacy professor at Stanford Law School. “Absent a very strong federal privacy law, we’re all screwed.”

https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

As I’ve discussed before, this will not be solved by policy or technology alone. It’s those two plus real criminal penalties for violators and simple controls everyone can use, like in a decentralized social network.

Watch this space.

The Automation of Healthcare

  • January 3, 2020
  • by Andy

Below are a couple of competing views on the coming automation of healthcare. Healthcare is ripe for it because it’s a non-scalable industry: highly educated service providers are paid for their time, including for menial and repetitive tasks that could be automated. This is why your doctor charges so much (among other inefficiencies).

The other factor here is that, in the US at least, you own your medical data. Few people feel like they own it, but legally you do; it’s just locked away with your doctors, labs, and other health providers. All of the big tech companies, as well as several healthcare tech providers, are working on actually providing access and portability to this data.

Once that is available, my view is that there’s a big opportunity for healthcare providers to offer APIs over that standardized data. Want a second opinion? Submit your data to another doctor. Want to see if a pharmaceutical or supplement helps with your condition? Submit your data. Behind those APIs will need to be reasonably sophisticated AIs that can assess and respond to the patterns seen in your data. Doctors provide not only the input to the AI but also the specialization needed on top for unique cases and follow-up. This makes healthcare a lot more scalable.
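
To make that concrete, here is a minimal sketch of what such a provider API could look like, assuming the patient’s data arrives in a standardized, portable format. The record shape, endpoint name, and triage stub below are hypothetical, not any vendor’s actual API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    code: str       # e.g. a LOINC code for a lab test or vital sign
    value: float
    unit: str
    taken_at: str   # ISO-8601 timestamp

@dataclass
class PatientRecord:
    patient_id: str
    conditions: List[str]            # diagnosis codes
    medications: List[str]
    observations: List[Observation]

@dataclass
class Assessment:
    summary: str
    confidence: float                # the model's own confidence, surfaced to the doctor
    needs_specialist_review: bool    # unusual or low-confidence cases go to a human

def run_triage_model(record: PatientRecord) -> float:
    """Stand-in for a real AI model; returns a confidence score in [0, 1]."""
    return 0.9 if len(record.observations) >= 5 else 0.5

def second_opinion(record: PatientRecord) -> Assessment:
    """Hypothetical provider endpoint: the AI screens the standardized record,
    and anything it isn't confident about is escalated to a physician."""
    score = run_triage_model(record)
    return Assessment(
        summary=f"Automated review of {len(record.observations)} observations",
        confidence=score,
        needs_specialist_review=score < 0.8,
    )
```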

In his upcoming book, The Future Is Faster Than You Think, which will hit bookshelves in late January 2020, Diamandis makes the case for why he believes big tech companies are going to be running healthcare by 2030. In December, he came to Fast Company’s offices to make the case for why Big Tech is the doctor of the future.

https://www.fastcompany.com/90440921/amazon-and-apple-will-be-our-doctors-in-the-future-says-tech-guru-peter-diamandis

Hamish Fraser first encountered Babylon Health in 2017 when he and a colleague helped test the accuracy of several artificial intelligence-powered symptom checkers, meant to offer medical advice for anyone with a smartphone, for Wired U.K. Among the competitors, Babylon’s symptom checker performed worst in identifying common illnesses, including asthma and shingles. Fraser, then a health informatics expert at the University of Leeds in England, figured that the company would need to vastly improve to stick around.

https://www.fastcompany.com/90440922/should-you-get-medical-advice-from-a-bot-doctors-arent-so-sure

President Nixon Never Actually Gave This Apollo 11 Disaster…

  • December 1, 2019
  • by Andy

This is impressive, and as the quality goes up and the computing power and expertise needed to produce these go down, we’re going to enter an even stranger phase of “fake news” and alternate facts.

A new MIT film installation uses that exact premise to shed light on so-called deepfake videos and how they are used to spread misinformation. Deepfakes use artificial intelligence technologies to create or alter a video to make them untrue in some way. MIT’s Center for Advanced Virtuality created a video of Nixon giving a speech that was actually written for him — but that he never ended up delivering. The video is the centerpiece of “In Event of Moon Disaster,” opening Friday at the International Documentary Film Festival Amsterdam (IDFA). The installation was supported by the Mozilla Foundation and the MIT Open Documentary Lab.

https://www.wbur.org/news/2019/11/22/mit-nixon-deep-fake

Tech And The Military: 15+ Tech CEOs And Investors…

  • September 5, 2019
  • by Andy

An important debate is going on, and I’m going to sidestep the core question as it is multi-faceted and complex. However, there is another perspective to consider here: workers (well, at least tech knowledge workers) have shown they now have another way to express their views meaningfully in the debate. They can now substantively communicate (and protest) their views on the military and on their employer’s involvement with it. This “public intellectual” debate has been too limited (in the US at least) since WWII (granted, there have been protest voices with every military action, but not quite like this).

In April 2018, a group of over 3,100 Google employees wrote a letter to the company’s CEO in protest of its work on Project Maven, a military AI project that would help drone warfare tools become more accurate.
The letter explained, “We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology… This plan will irreparably damage Google’s brand and its ability to compete for talent.”


Former Google Cloud CEO Diane Greene responded to workers’ concerns by announcing in June 2018 that the company would fulfill the requirements of its existing Department of Defense contract for Project Maven, but that it would decline to pursue follow-on contracts or similar projects in the future.

https://www.cbinsights.com/research/tech-military-government-partnerships-quotes/

BTW, I totally agree with Marc Benioff about the need for exec-level ethics advisors in tech. Marc, I’m available…

[Salesforce employees] ask me questions I don’t have the answer to and I don’t have the authority or understanding to be able to opine on… I said I need a team that I can pivot to to say, “What is the right thing to do here?” And I’m like, it’s crazy that we don’t have a team like this. And it’s crazy that no company does.

Marc Benioff

Will California’s New Bot Law Strengthen Democracy?

  • August 18, 2019
  • by Andy

But what is the definition of a “bot,” and where does it end? Most digital businesses are 80%+ automated in their selling and incentives. Does that fall under this law?

When you ask experts how bots influence politics—that is, what specifically these bits of computer code that purport to be human can accomplish during an election—they will give you a list: bots can smear the opposition through personal attacks; they can exaggerate voters’ fears and anger by repeating short simple slogans; they can overstate popularity; they can derail conversations and draw attention to symbolic and ultimately meaningless ideas; they can spread false narratives. In other words, they are an especially useful tool, considering how politics is played today.

https://www.newyorker.com/tech/annals-of-technology/will-californias-new-bot-law-strengthen-democracy
