Jillian Drummond
Dec 10, 2019


Bearing the Burden of Innovative Progress.

No one should bear the burden of progress in a new world of endless possibilities.

Photo Credit: Jason Moreh via StockVault

Do you need a ride? Immediately? Open a ride-share app. Craving food at 3 a.m.? Tap a delivery-service app. Want a geneticist to match you with potential romantic partners based on your DNA? There’s an app for that, too. Our lives are growing more convenient and more efficient, but is there a cost?

AI is rapidly changing the world around us. The MIT Technology Review notes that “Bias can creep in at many stages of the deep-learning process, and the standard practices in computer science aren’t designed to detect it.”
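Detection is worth making concrete. Below is a minimal, hypothetical Python sketch of one common fairness check, comparing selection rates across groups against the “four-fifths rule” used in U.S. employment-discrimination analysis; the data and group labels are invented purely for illustration and are not drawn from any real system.

```python
# Minimal, hypothetical check for disparate impact: compare each group's
# selection rate to the best-off group's rate (the "four-fifths rule").
# All data below is invented for illustration.

from collections import defaultdict

# Each record: (group label, 1 if the model recommended hiring, else 0).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    selected[group] += hired

rates = {g: selected[g] / totals[g] for g in totals}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    status = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```

Checks like this only catch what the evaluator thinks to measure, which is part of why standard practice so often misses bias.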

As a result, ethical tech has become a necessary field of inquiry. For example, one might ask who benefits from something like genetic procreation, considering access and cost. No single innovation can solve inequality and every injustice, but it can take an ethical, humanistic approach, both for the sake of a better product and as a way to produce less harm.

Social Workers in Tech?

Upon entering the field of social work, you immediately begin to acquire skills that aid the well-being of individuals, families, and communities. Traditionally, when people think of social workers, they picture case managers in child welfare or some other form of direct work with people, and this is often the case. We are, in fact, the “helping professionals.” Social workers are invested in understanding how macro-level systems affect people at the individual, or micro, level; in using interventions to help others survive in systems that can be oppressive; and in taking holistic approaches to problem solving.

Our mission at the Columbia School of Social Work? …“make waves, move mountains, change lives.” A new minor, Emerging Technology, Media, and Society, is invested in teaching social workers to “build a critical foundation for understanding the interplay of digital technologies and society and the important role of social workers in this space.” In other words, it offers a way of evaluating the impact of tech within the communities we serve and on our society as a whole.

With an understanding of how racism, gender bias, ableism, and other prejudices have shaped the systems that guide the way we function as a society, it is no surprise that those same issues are arising in tech today, where AI reflects our current biases and amplifies them to a higher degree. Using an Ethical Matrix, we can assess the projected impact on a community and on the individual members within that community, especially those most vulnerable to harm.

Ruha Benjamin is a sociologist and scholar on issues related to ethics, bias, and diversity in tech, and on strategies for building a better ethical framework around technology that impacts us all, but in different ways. In her book Race After Technology: Abolitionist Tools for the New Jim Code, she highlights something critical: “many people are forced to live in someone else’s imagination.”

Want to hire people using AI? Try HireVue.

Traditional face-to-face interviews are being replaced by online interviews that use AI. So much so that at Duke University, “faculty and advisers are working to help prepare students for this new challenge in the application process.” Challenge? I’ll get to that soon.

HireVue’s messaging is that it uses AI to rate job seekers through video analysis, screening and scoring candidates before a recruiter ever sees the interview. Among its stated objectives:

● Screen job seekers through an on-demand video, in which they answer set questions

● Create “personality profiles” by using AI algorithms to analyze and predict performance, based on the interview

The potential for bias is apparent, and yet this product is driving hiring practices at Fortune 500 companies across the globe: it is used by over 700 employers, including Mt. Sinai, Urban Outfitters, Honeywell, and Intel.
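HireVue has not published its model, so what follows is only a generic, hypothetical sketch of how an automated interview screen might combine extracted features into a single score, and of why that design can encode bias: if the weights are learned from past hiring outcomes, features tied to expressiveness, speech style, or accent inherit whatever bias those outcomes contained. Every feature name and weight below is invented for illustration.

```python
# NOT HireVue's actual system: a generic, hypothetical illustration of an
# automated interview screen that reduces a candidate to a weighted score.

# Hypothetical features extracted from a recorded interview (normalized 0-1).
candidate_features = {
    "speech_rate": 0.72,
    "facial_expressiveness": 0.35,
    "keyword_overlap": 0.88,
    "vocal_energy": 0.41,
}

# Hypothetical weights learned from historical "top performer" labels.
# If past hiring favored a particular presentation style, that preference
# is baked into these numbers.
learned_weights = {
    "speech_rate": 0.25,
    "facial_expressiveness": 0.30,
    "keyword_overlap": 0.30,
    "vocal_energy": 0.15,
}

score = sum(candidate_features[f] * learned_weights[f] for f in learned_weights)
print(f"Predicted 'employability' score: {score:.2f}")

# A candidate who is equally qualified but less expressive on camera
# (because of disability, culture, or personality) scores lower by construction.
```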

Beneficiaries

To receive an interview, a job candidate will already have met all the basic job requirements: education, years of applicable experience, and so on. So, all else being equal, there stands a chance that certain features and attributes are deemed preferential. Using the Ethical Matrix, I’ve analyzed the stakeholders and who stands to benefit: HireVue, and white male applicants who don’t present as experiencing disability.
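To make that exercise concrete, here is a minimal sketch of what an Ethical Matrix for this product might look like in code: stakeholders on one axis, concerns on the other, with each cell marked as a likely benefit or burden. The entries simply summarize the analysis above; they are illustrative, not exhaustive or definitive.

```python
# A minimal, illustrative Ethical Matrix: rows are stakeholders, columns are
# concerns. "+" = likely benefit, "-" = likely burden, "?" = unclear from
# publicly available information. Entries reflect this article's analysis only.

concerns = ["accuracy", "fairness", "transparency", "access"]

matrix = {
    "HireVue":                      {"accuracy": "+", "fairness": "?", "transparency": "-", "access": "+"},
    "Employers":                    {"accuracy": "?", "fairness": "?", "transparency": "-", "access": "+"},
    "Typical applicants":           {"accuracy": "?", "fairness": "?", "transparency": "-", "access": "?"},
    "Applicants with disabilities": {"accuracy": "-", "fairness": "-", "transparency": "-", "access": "-"},
    "Non-native English speakers":  {"accuracy": "-", "fairness": "-", "transparency": "-", "access": "-"},
}

# Print the matrix as a simple table.
print(f"{'stakeholder':30}" + "".join(f"{c:>14}" for c in concerns))
for stakeholder, row in matrix.items():
    print(f"{stakeholder:30}" + "".join(f"{row[c]:>14}" for c in concerns))
```

Filling in a grid like this forces the question the rest of this piece asks: who benefits, and who bears the burden?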

…Bearing the Burden.

The company’s stated purpose is “Better hiring with AI-Driven predictions.” What is ‘better’? According to a recent survey by the Pew Research Center, white men view reverse discrimination as a primary deterrent to their employment success. MarketWatch reports:

“Pew asked a nationally representative sample of white men with jobs in STEM fields whether they thought their gender made it harder for them to succeed. Of the 14% who said yes, more than 1 in 10 said they had been affected by reverse discrimination. When Pew posed a similar question about race to the survey respondents, nearly 20% of those who said race made their job harder cited reverse discrimination as the reason for their challenges.”

The U.S. Department of Labor’s unemployment statistics show that the national average for unemployment is 3.9%: 3.4% for white men, versus 6.5% and 4.5% for Black and Latinx workers, respectively.

The people who bear the burden here are applicants who are often already discriminated against:

*Black Americans: in 2017, Harvard Business Review reported on a study published in the Proceedings of the National Academy of Sciences documenting persistent hiring discrimination against Black Americans.

“We wondered if this level of discrimination might be influenced by applicant education, applicant gender, study method, occupational groups, and local labor market conditions. When we controlled for these factors, we found that none account for the trend in discrimination.” ~Quillian et al., Harvard Business Review

*Latinx, Asian, and other statistical minorities

*Immigrants, who might have different cultural expressions

*Applicants who speak English as a second language

*Applicants experiencing disability or chronic illness

Under federal law, enforced by the EEOC, “it is illegal to discriminate against someone (applicant or employee) because of that person’s race, color, religion, sex (including gender identity, sexual orientation, and pregnancy), national origin, age (40 or older), disability or genetic information.”

FindLaw explains what this means for the hiring process: “Generally, employers should avoid questions that relate to classes that are protected by discrimination laws.”

Using AI can provide a workaround, and concerns like these have already prompted formal complaints against HireVue. Drew Harwell of The Washington Post reports: “A prominent rights group is urging the Federal Trade Commission to take on the recruiting-technology company HireVue, arguing that the firm has turned to unfair and deceptive trade practices in its use of face-scanning technology to assess job candidates’ ‘employability.’ The Electronic Privacy Information Center, known as EPIC, on Wednesday filed an official complaint calling on the FTC to investigate HireVue’s business practices…”

You can find HireVue’s response here. Within HireVue’s own assessment, multiple biases exist, grounded in the implicit bias we hold as humans and only amplified by their technology. For example, HireVue’s FAQ includes the following:

Will I be rejected if I’m not as expressive as others or am on the autism spectrum? “Many people on the autism spectrum excel in job roles that don’t require you to be chatty or socially charismatic at all. Because each HireVue Assessment model (or algorithm) is designed for a specific job role, if you’re applying for a technical or analytical job type, it’s likely that your expressiveness matters very little to your ability to be successful in the job.”

The bias here implies that all people on the autism spectrum are similar enough to predict an outcome, and also that everyone on the autism spectrum applies only for certain kinds of jobs.

‘We definitely can’t wait for Silicon Valley to become more diverse’ ~Ruha Benjamin, Princeton University

Also, while employers benefit from using this technology as a means of productivity and efficiency, they stand to lose the ability to pick up on unique human emotions, which AI is only beginning to recognize with any reliability.

Hi, I’m a social worker. How can I help you? Social workers are equipped to serve in various capacities in the tech world, not just to fix harm already done, but to guide the future progress of the systems that affect us all.

“We are called to be the architects of our future, not its victims” — Buckminster Fuller

HireVue is just one example among a wave of emerging technologies. The future progress of our society depends on revolutionary innovations that vastly improve upon our antiquated systems, but innovation can’t be truly revolutionary if it continues to discriminate. No one should bear the burden of progress in a new world of endless possibilities.

Recommendations.

· Create a diverse, equitable, and inclusive team; let their voices be heard and their concerns be addressed. Address internal biases. Ensure ethical principles align with projected goals.

· Reach out to different communities for feedback (not simply to secure buy-in or to provide incentives so they like and accept your product).

· Understand the implications.

· Build a team that can identify nuances in the human perspective. The founder(s) cannot be expected to know every ethical concern, even as it relates to their product, but should be open to understanding that tech can have different implications for different people. Include members from communities that may be adversely affected.

· Re-evaluate your product or service. When your product reaches or exceeds scale, re-evaluate its impact and limitations, and be open to fixing any shortcomings.
