TOPIC: Explainable AI and Human Computer Interaction.

Explainable AI and Human Computer Interaction. 5 years 4 weeks ago #3518

  • Charlotte
  • Offline
  • New Member
  • Posts: 1
  • Reputation: 0


I want to connect what I just talked about in terms of human-centered design to AI, make it a little more visceral, and hopefully illustrate why we need to think about this in a very particular way, as a bridge to the next session. Let's start with a brain-teaser. I have a mine, and 1% of the rocks in this mine are a rare and valuable mineral, unobtainium. I have a detector. If unobtainium is present, the detector is perfect: it always detects the unobtainium. If it's not there, I get a false positive 10% of the time; let me say that correctly: it's correct 90% of the time. So it's pretty good: perfect when the mineral is there, and right 90% of the time when it isn't.

Now I have a rock, and the detector says this is a piece of unobtainium. If the detector is correct, the rock is worth $1,000. I'm selling this rock; who's buying? All right, we've got some buyers, some "no, thank you"s, and some "please give me three more minutes to do the math." Even if you're not willing to commit, you can see there's an issue here: we've been given probabilities that we have to interpret, and even if you've taken an intro stats class where you learn how to do this calculation, you have to stop and think. The answer is no, don't buy the rock. Out of a hundred rocks, one is actually unobtainium and the detector will flag it, but the detector will also give about ten false positives on the other ninety-nine, so only about one in eleven positive readings is the real thing, and the expected value of this rock is only around $90. So this is probably not unobtainium; it's not a safe bet, and it's not worth $200. Now let's change the story.
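To make the arithmetic concrete, here is a minimal sketch in Python (the talk itself gives no code; the function name `posterior_positive` is mine) that applies Bayes' rule to the numbers in the story: a 1% base rate, a perfect hit rate, a 10% false-positive rate, and a $1,000 payoff.

```python
def posterior_positive(base_rate, sensitivity, false_positive_rate):
    """P(truly positive | detector says positive), via Bayes' rule."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Numbers from the talk: 1% of rocks are unobtainium, the detector never
# misses real unobtainium, and it false-alarms on 10% of ordinary rocks.
p = posterior_positive(base_rate=0.01, sensitivity=1.0, false_positive_rate=0.10)
expected_value = p * 1000  # the rock pays $1,000 only if it is the real thing

print(f"P(unobtainium | positive reading) = {p:.3f}")                 # ~0.092, about 1 in 11
print(f"Expected value of the flagged rock = ${expected_value:.0f}")  # ~$92, far below $200
```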

Now I'm screening at the airport. I have a detector, and it flags a particular person as a threat. What's the probability this person is actually a threat? We can change the story to any of the applications we're going to talk about with AI, image detection, and image analysis; these contexts are increasing and growing in both military and civilian applications. There is long-standing science here, and this is just one tiny example: making probabilistic judgments is hard, and cognitive science has been working on this for a very long time. Moreover, depending on how probabilities are presented, people will make different choices. I can tell you the same story and ask whether you would rather have case A or case B; if I change the wording without changing the math, in one framing everybody chooses case A and in the other everybody chooses case B. So how we present probabilities and how people perceive risk also come into these judgments.

When we talk about explainable AI, it's not sufficient for the AI to just give people probabilities. A lot of work right now is on providing those probabilities, and hopefully I have demonstrated that this is not quite enough information for humans to make good judgments. It even goes beyond "these are the pixels that are driving this probability." We need to help humans interpret those probabilities correctly, so we need to think about human-centered design and human-computer interaction so that we can help people make good decisions from the information that the AI is providing. Thank you.
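For the airport version of the story, the same Bayes calculation shows how a tiny base rate dominates the result. The figures below (1 threat per 100,000 travelers, a 99%-sensitive screener, a 1% false-positive rate) are purely hypothetical, since the talk gives no numbers for this case; this is only a sketch under those assumptions.

```python
def posterior_positive(base_rate, sensitivity, false_positive_rate):
    """P(truly positive | detector says positive), via Bayes' rule."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothetical screening numbers (not from the talk): 1 traveler in 100,000
# is an actual threat; the screener catches 99% of threats and false-alarms
# on 1% of everyone else.
p = posterior_positive(base_rate=1e-5, sensitivity=0.99, false_positive_rate=0.01)
print(f"P(actual threat | flagged) = {p:.5f}")  # ~0.00099, roughly 1 in 1,000 flags
```

Even with a very good detector, under these assumptions roughly 999 out of every 1,000 flagged travelers are not threats, which is exactly the kind of interpretation an explanation needs to help people reach.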
Last edit: 5 years 4 weeks ago by Charlotte.