mfw@wyle.org | 1.425.249.3936
Sunday, September 15, 2019
Ig Nobel Prizes
https://arstechnica.com/science/2019/09/2019-ig-nobels-honor-cubed-wombat-poo-magnetic-roaches-and-more/
- Temperature differences between the left and right sides of French postmen's private parts
- Automatic diaper changer for humans
- more
Friday, September 13, 2019
Jetsons then and now
If you are old enough to remember the Jetsons intro:
. . . then you will love this new Arconic ad:
Sunday, September 8, 2019
AI, human behaviors, bias, subtle unobserved data, & causality (long but worth it)
In my day job, I am now trying to measure and predict how much incremental revenue each of my teams' software efforts will deliver for my company. The inherent uncertainty (error) of these estimates is larger than any forecast value, and 70% or more of all software efforts fail across most industries. The most frustrating part of my experience is that everyone lies and pretends their estimates are perfect, with no data or scholarly analysis behind the justifications. And, of course, the real "attribution" of any revenue to any specific effort in the complex ecosystem of a marketplace is very dodgy and is itself uncertain.
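To see how quickly those errors compound, here is a minimal Monte Carlo sketch; every distribution and number in it is a made-up assumption for illustration, not data from my company:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo draws

# All assumptions, invented for illustration:
# - chance the effort succeeds at all (centered near 30%, per the failure stats above)
p_success = rng.beta(3, 7, N)
# - revenue if it succeeds, heavy-tailed around $1M
revenue = rng.lognormal(mean=np.log(1_000_000), sigma=0.8, size=N)
# - fraction of that revenue you can honestly attribute to this one effort
attribution = rng.uniform(0.1, 0.6, N)

incremental = (rng.random(N) < p_success) * revenue * attribution

print(f"mean forecast: ${incremental.mean():,.0f}")
print(f"std deviation: ${incremental.std():,.0f}")  # comes out larger than the mean
```

Even with these generous assumptions, the standard deviation of the forecast is roughly twice the forecast itself.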
My colleagues in "analytics" data science claim to measure (perfectly, of course!) the exact percentage of people who "would have bought anyway" without my team's marketing effort / campaign / incentive. Their assumptions are (of course!) tested against test formulations that embed the same biases, and their predictive accuracy on unseen data is never tested. But in general they do great work, and I agree with all of their reasoning if not all of their numerical methods.
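For what it is worth, the honest version of "would have bought anyway" requires a randomized holdout, and even then the answer comes with a confidence interval rather than an exact percentage. A minimal sketch with invented conversion counts:

```python
# Hypothetical A/B split: "treated" users saw the campaign, a randomized
# holdout did not. All counts below are invented for illustration.
treated_n, treated_buyers = 50_000, 2_600
control_n, control_buyers = 50_000, 2_000

p_t = treated_buyers / treated_n   # 5.2% conversion with the campaign
p_c = control_buyers / control_n   # 4.0% baseline conversion without it

lift = p_t - p_c                   # incremental conversion rate
would_have_bought_anyway = p_c / p_t

# Standard error of the difference of two proportions: the uncertainty
# band that rarely makes it into the slide deck.
se = (p_t * (1 - p_t) / treated_n + p_c * (1 - p_c) / control_n) ** 0.5
print(f"incremental lift: {lift:.3%} +/- {1.96 * se:.3%} (95% CI)")
print(f"'would have bought anyway': {would_have_bought_anyway:.1%}")
```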
But the main point of my rant here, and the reason everyone in AI should pay more attention to the points Taleb raises in Incerto (uncertainty), is that many of the axioms and foundations upon which we are basing "AI" and "data science" are themselves very questionable:
- Judea Pearl has recently explained why our perceptions of "causality" are fundamentally wrong, and why current AI methods and applications are exacerbating the consequences
- Megan Stevenson has recently shown that judges' misperceptions and reliance on AI algorithms have not improved fairness in our legal proceedings
In this latter case there are many reasons to be even more careful about AI in our judiciary and legal proceedings, beyond the points raised by the author. The origin data on which both the judgments and the AI's calculations rest are biased by the humans who acted, recorded, selected, and encoded them. And judges still rely on ancient, unconscious human perceptions of other humans' feelings, motivations, and trustworthiness.
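Pearl's point is easy to demonstrate with a toy simulation: when a hidden confounder drives both the decision and the outcome, a naive comparison reports a large "effect" that does not exist. Everything below is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hidden confounder (say, prior record) drives BOTH the decision
# and the outcome; the "treatment" itself has zero true effect.
confounder = rng.normal(size=n)
treatment = (confounder + rng.normal(size=n) > 0).astype(float)
outcome = 2.0 * confounder + rng.normal(size=n)   # true treatment effect is 0

# Naive estimate: compare average outcomes by treatment group
naive = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

# Adjusted estimate: regress outcome on treatment AND the confounder
X = np.column_stack([np.ones(n), treatment, confounder])
beta = np.linalg.lstsq(X, outcome, rcond=None)[0]

print(f"naive 'effect':  {naive:.2f}")    # large and spurious
print(f"adjusted effect: {beta[1]:.2f}")  # close to the true value, 0
```

The catch, and this is Pearl's point, is that you can only adjust for confounders that someone observed and encoded; the biases baked into the origin data stay invisible.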
Tuesday, September 3, 2019
Google is now monetizing those annoying "I'm not a robot" challenges
Google sells, and more often gives away, the use of reCAPTCHA to protect your web site or web pages from bots. When its algorithm suspects you are a bot, or that you are hammering a site with a lot of traffic, it challenges you with a problem that is difficult for a bot to solve.
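For anyone who has not wired this up: the page embeds Google's widget, and when a visitor solves the challenge, the browser receives a token that your server confirms against Google's siteverify endpoint. A minimal sketch of that server-side check (the secret key is a placeholder; Google issues the real one):

```python
import requests  # third-party HTTP client: pip install requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
RECAPTCHA_SECRET = "your-secret-key-here"  # placeholder

def is_human(token, client_ip=None):
    """Check a reCAPTCHA response token produced by the browser widget."""
    payload = {"secret": RECAPTCHA_SECRET, "response": token}
    if client_ip:
        payload["remoteip"] = client_ip  # optional field per Google's docs
    result = requests.post(VERIFY_URL, data=payload, timeout=5).json()
    return result.get("success", False)
```

The detail that matters for this post is the flow, not the fields: every solved challenge is a human judgment that passes through Google's servers.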
These challenges are usually image classification problems, and solving them helps Google improve its image classification algorithms. The human judgments the world provides to Google this way are called "labels" in machine learning.
Now Google is selling the output of those annoying reCAPTCHA challenges that billions of people perform for it for free. If you need your datasets labeled by people answering those "I'm not a robot" prompts, you can sign up and pay Google for their free labor.
Brilliant!