Just as quickly as Artificial Intelligence systems show promise in transforming how we work, live, drive, and even interact with law enforcement, scholars and others question the ethics surrounding these autonomous decision-making systems. The ethics of AI focuses on whether these systems make decisions that discriminate against people on the basis of race, religion, sex, or other characteristics.
AI's profound bias problems have become public in recent years, thanks to researchers like Joy Buolamwini and Timnit Gebru, authors of a 2018 study showing that face-recognition algorithms nearly always identified white males correctly but recognized black women only about two-thirds of the time. The consequences of that flaw can be serious if the algorithms lead law enforcement to misidentify suspects, or if doctors use them to decide whom to treat.
The challenge for developers is to remove bias from AI, which is difficult because the system's behavior depends on the data used to train it. Training data must be vast, diverse, and representative of the population so that the AI system learns from a strong sample.
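The dependence on training data can be made concrete with a minimal sketch. The example below uses entirely hypothetical data: two groups whose true decision boundaries differ, and a training set dominated by one group. A simple classifier fitted to minimize overall training error ends up serving the majority group perfectly while misclassifying members of the underrepresented group.

```python
# Minimal sketch with hypothetical data: a single decision threshold fitted
# to an imbalanced training set favors the majority group.

def fit_threshold(samples):
    """Brute-force the threshold with the fewest training errors.

    Each sample is (feature_value, true_label); the model predicts
    positive when feature_value >= threshold.
    """
    candidates = sorted({x for x, _ in samples})
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum((x >= t) != label for x, label in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(samples, t):
    """Fraction of samples the threshold classifies correctly."""
    return sum((x >= t) == label for x, label in samples) / len(samples)

# Group A is truly positive when the feature >= 5; group B when >= 3.
group_a = [(x, x >= 5) for x in range(10)]
group_b = [(x, x >= 3) for x in range(10)]

# Skewed training set: group A outnumbers group B nine to one.
train = group_a * 9 + group_b

t = fit_threshold(t_samples := train)

print(t)                     # the fitted threshold sits at the majority boundary
print(accuracy(group_a, t))  # majority group: perfect accuracy
print(accuracy(group_b, t))  # minority group: measurably worse
```

The model is not malicious; it simply optimized the objective it was given on the data it was shown. Balancing the training set, or measuring error per group rather than overall, would expose and correct the gap.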
Discuss two examples of situations where bias can skew the data, causing an AI system to discriminate against certain groups of people. How can fairness be built into AI systems? Are the advantages that AI brings to a system worth the bias, if left uncorrected?