Despite all the hype that Artificial Intelligence is receiving these days, we are not exactly at the point where Skynet commands its army of Terminators to go conquer the world. AIs are assigned the more mundane work of analyzing data about what is going on around us and making predictions, be it how much stuff people will buy next summer or who is more likely to pay back a bank loan. In essence, AIs are little more than tools for building mathematical models and equations: you feed in data and get out a prediction based on it.
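The "feed in data, get out a prediction" loop can be sketched in a few lines. The sales figures below and the one-feature least-squares fit are invented purely for illustration; a real system would use far more data and features, but the shape is the same:

```python
# A predictive "AI" in its simplest form: fit a line to past data,
# then use it to predict the next value. All numbers are made up.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Past summers (year index -> units sold): the "training data".
years = [1, 2, 3, 4]
sales = [100, 120, 140, 160]

a, b = fit_line(years, sales)
prediction = a * 5 + b  # forecast for next summer: 180 units
```

The model has no opinions; it just extrapolates whatever pattern the input data contains.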
Herein lies the problem: the model has to be fed objective data, and the predictions follow from that data. The fun starts if you are the SJW type and measure the fairness of the system by its outcomes. If the machine studies real-world inputs and gives you a result you don't like, it can only mean one of two things: your assumptions about the world are wrong, or the machine is wrong. Guess which explanation the SJWs choose? So, robots are biased.
In fact, the robots are so based that IBM discovered the need to create its own thought police, lest the robots exhibit some dangerous wrongthink and expose the reality for what it is. From TechCrunch:
“The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing ‘potentially unfair outcomes as they occur,’ as IBM puts it. It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.”
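The article does not say how IBM's tool actually measures bias, but a common outcome-based check in this space is the disparate-impact ratio between two groups' approval rates. A minimal sketch with hypothetical decision data (this is a generic metric, not IBM's implementation):

```python
# A sketch of one common outcome-based fairness check: the
# "disparate impact" ratio between two groups' approval rates.
# Decision data is invented; 1 = approved, 0 = denied.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of approval rates; values well below 1.0 are the
    outcome disparity such tools report as 'bias'."""
    return approval_rate(group_a) / approval_rate(group_b)

group_a = [1, 0, 1, 0, 0]   # 40% approved
group_b = [1, 1, 1, 1, 0]   # 80% approved

ratio = disparate_impact(group_a, group_b)  # 0.5
flagged = ratio < 0.8  # the common "four-fifths rule" threshold
```

Note that a check like this looks only at outcomes per group; it says nothing about whether the underlying inputs differed between the groups, which is exactly the dispute the article is about.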
Though the people in Silicon Valley remain sheltered from real life, occasionally it hits them in the face, and they feel an urgent need to explain this reality away, be it the prevalence of "pale males" among software developers, or the fact that people living in certain ZIP codes for some unknown reason commit more crime than people living in other ZIP codes, to say nothing of the ratio of "pale males" to "colored males" within said ZIP codes. It is very entertaining to watch the Silicon Valley types try to explain the boo-boo away with their usual sophistry. Also from TechCrunch:
“‘What’s relevant is that the police department has made an institutional decision to over-police that neighborhood, thereby generating more police interactions in that neighborhood, thereby making people with that ZIP code more likely to be classified as dangerous if they are classified by risk assessment algorithms,’ Ball said.
And even if the police were to have perfect information about every crime committed, in order to build a fair machine learning system, ‘we would need to live in a society of perfect surveillance so that there is absolute police knowledge about every single crime so that nothing is excluded,’ he said. ‘So that there would be no bias. Let me suggest to you that that’s way worse even than a bunch of crimes going free. So maybe we should just work on reforming police practice and forget about all of the machine learning distractions because they’re really making things worse, not better.’
He added, ‘For fair predictions, you first need a fair criminal justice system. And we have a ways to go.'”
Now, if you thought robots were merely racist, you would be wrong: they are sexist as well! When a bank adopts an automated, "colorblind" procedure for assessing the probability that people will repay their loans, based on things like income, spending behavior, and collateral, women for some unknown reason become less likely to get a loan than when a human makes the call in a meeting. Of two objectively equal applications, the woman's is more likely to be approved, especially if she shows up to the bank in person. In other words, a systemic gender bias exists among banking workers, and it favors women: male bankers are more likely to "white knight" for a woman who shows up, and female bankers to show solidarity. But a robot that is impartial and assesses applications on merit… is problematic.
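A "colorblind" scorer of the kind described might look like the following sketch: a logistic model over only the features the text names (income, spending, collateral). The weights and the applicant are invented for illustration; the point is that gender is simply not an input to the function:

```python
import math

# Hypothetical loan scorer: a logistic model over only income,
# spending, and collateral. Weights are made up; a real bank
# would fit them to historical repayment data.

def repay_probability(income, spending, collateral):
    z = 0.00005 * income - 0.0001 * spending + 0.00002 * collateral - 2.0
    return 1 / (1 + math.exp(-z))

def decide(applicant, threshold=0.5):
    """Approve if the predicted repayment probability clears the bar."""
    p = repay_probability(**applicant)
    return "approve" if p >= threshold else "deny"

# An invented applicant: note there is no gender field at all.
applicant = {"income": 60000, "spending": 20000, "collateral": 50000}
decision = decide(applicant)
```

Two applicants with identical numbers get identical decisions from this function, which is precisely why its outcomes can diverge from those of a human loan officer in a face-to-face meeting.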
I think that by now it should be obvious to anyone that, given the current complexity of the world, whoever has the better AIs will control the world: they will be able to make better decisions and be more efficient and effective. But if you build "crap in, crap out" AIs, you will not have the best ones, and you will lose. So I figure the SJWs and their robots will always be bested by based robots.