Why Australia is quickly developing a technology-based human rights problem

Human rights advocates have called on the Australian government to protect the rights of all in an era of change, saying tech should serve humanity, not exclude the most vulnerable members of society.

Salesforce’s Kathy Baxter

(Image: Jason McCormack/Australian Human Rights Commission)

Artificial intelligence (AI) might be technology’s Holy Grail, but Australia’s Human Rights Commissioner Edward Santow has warned about the need for responsible innovation and an understanding of the challenges new technology poses for basic human rights.

“AI is enabling breakthroughs right now: Healthcare, robotics, and manufacturing; pretty soon we’re told AI will bring us everything from the perfect dating algorithm to interstellar travel — it’s easy in other words to get carried away, yet we should remember AI is still in its infancy,” Santow told the Human Rights & Technology conference in Sydney in July.

Santow was launching the Human Rights and Technology Issues Paper, which was described as the beginning of a major project by the Human Rights Commission to protect the rights of Australians in a new era of technological change.

The paper [PDF] poses questions centred on what protections are needed when AI is used in decisions that affect people’s basic rights. It also asks what is required from lawmakers, governments, researchers, developers, and tech companies big and small.

Pointing to Microsoft’s AI Twitter bot Tay, which in March 2016 showed the ugly side of humanity — at least as present on social media — Santow said it is a key example of why AI must be got right before it is unleashed on the public.

Tay was targeted at American 18- to 24-year olds and was “designed to engage and entertain people where they connect with each other online through casual and playful conversation”.

Less than 24 hours after its arrival on Twitter, Tay had gained more than 50,000 followers and produced nearly 100,000 tweets.

Tay started out fairly sweet; it said hello and called humans cool. But as Tay interacted with other Twitter users, its machine learning architecture hoovered up all of those interactions: the good, the bad, and the awful.

Some of Tay’s tweets were highly offensive. In less than 16 hours, Tay had turned into a brazen anti-Semite and was taken offline for retooling.

This kind of behaviour had been observed before with IBM Watson, which once exhibited its own inappropriate streak, swearing after learning the Urban Dictionary.

AccessNow’s Brett Solomon

(Image: Jason McCormack/Australian Human Rights Commission)

As Human Rights Commissioner, Santow wanted to show just how easy it is to have AI meant for good turn bad.

“As the technology progresses, AI will be very useful in the real world; the applications are almost limitless … while prediction is essential to almost every human activity, we humans are notoriously bad at it. If AI improves the accuracy of our forecasting, this could change everything,” Santow said.

He offered the roomful of human rights-focused individuals another example, this time of AI intended for good that in fact favoured the privileged.

Technology is now being used to help decide whether a prisoner should be released on parole, and as Santow explained, that decision involves a number of factors, including the prisoner’s level of remorse, their outside support network, and whether the individual presents an intolerable risk to the community.

A study of Israeli parole judges, which Santow cited, showed that the time of day an application was heard affected the individual’s chances of being released on parole.

“If your application was first on a list, you had about a 65 percent chance of being released; if you were the last before lunch, your chances were almost zero. If you were the first after lunch, your chance was back up to 65 percent before dropping back to around zero at the end of the day,” he explained.

“Unlike humans, computers don’t get tired, cross, or hangry. If powerful computers could be deployed on big datasets, this could improve how we make decisions.”

At least that was the idea behind the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) program, which was used to determine a prisoner’s risk to society across several US states.

The COMPAS tool was used to trawl through the correctional system’s vast datasets and a human judge would then consider the determination it made before making a ruling on a prisoner’s eligibility for parole.

“It was attractive as it emphasises data over subjectivity,” Santow said.

However, he said an analysis of a sample of Florida court records showed that African Americans were more than twice as likely as similar Caucasian offenders to be classified as medium-to-high risk.

“COMPAS gave the Caucasian a lower risk score — race, interestingly, was not a factor that COMPAS considered,” he continued.

“The problem almost certainly lay with the historical data relied on by COMPAS. We know African Americans have faced more police scrutiny, are more likely to receive heavier sentences, and are more likely to be convicted of crimes associated with poverty. We wouldn’t be surprised if COMPAS associated factors correlating with a person’s race, such as where they lived, with the risk of committing a crime.”
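The mechanism Santow describes can be illustrated with a short, entirely synthetic sketch. Nothing below comes from COMPAS or the Florida analysis; the data, feature names, and threshold are invented for illustration only. The point is simply that a risk model trained on historically biased labels will flag one group more often, even when race is never a feature.

```python
# Hypothetical illustration only: synthetic data standing in for "historical records"
# that over-police one group. Not COMPAS, and not real figures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

group = rng.integers(0, 2, size=n)                        # 0 = group A, 1 = group B (hypothetical)
neighbourhood = rng.normal(loc=group, scale=0.7, size=n)  # where a person lives, correlated with group
prior_contacts = rng.poisson(lam=1 + group, size=n)       # more recorded police contact for group B

# The "reoffended" label comes from biased historical records, so the label
# itself carries the disparity, not just the features.
label = (rng.random(n) < 0.2 + 0.15 * group).astype(int)

X = np.column_stack([neighbourhood, prior_contacts])      # note: group/race is NOT a feature
model = LogisticRegression().fit(X, label)

# Audit the model: how often does each group get flagged as medium-high risk?
high_risk = model.predict_proba(X)[:, 1] > 0.3            # assumed threshold
for g, name in ((0, "Group A"), (1, "Group B")):
    print(f"{name} flagged medium-high risk: {high_risk[group == g].mean():.0%}")
```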

See also: UK Ministry of Justice using data to gain control of prisons (ZDNet)

“If you follow the headlines, you’ll see that AI is sexist, racist, and full of systematic biases,” Kathy Baxter, user research architect at Salesforce, said during her talk at the Human Rights & Technology Conference.

But why is it happening? The people creating these tools aren’t necessarily doing it with evil in mind, and they probably don’t want to perpetuate the bias, especially if they’re actually trying to resolve it. But according to Baxter, the problem is that bias is so difficult to see in data. Equally complex, she said, is the question of what it means to be fair.

“AI is based on probability and statistics,” she continued. “If an AI is using any of these factors — race, religion, gender, age, sexual orientation — it is going to disenfranchise a segment of the population unfairly and even if you are not explicitly using these factors in the algorithm, there are proxies for them that you may not even be aware of.

“In the US, zip code plus income equals race. If you have those two factors in your algorithm, your algorithm may be making recommendations based on race.”
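Baxter’s proxy warning lends itself to a simple check. The sketch below is hypothetical: the data is synthetic and the column names are invented, but it shows the kind of test a team could run, asking whether a protected attribute can be recovered from the “neutral” features an algorithm actually uses.

```python
# Hypothetical illustration of a proxy check: can race be predicted from
# features the model is allowed to use? Synthetic data, invented correlations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

race = rng.integers(0, 2, size=n)                               # protected attribute (hypothetical groups)
zip_code = rng.normal(loc=race * 2.0, scale=1.0, size=n)        # stand-in for a zip-code feature
income = rng.normal(loc=50_000 - race * 15_000, scale=10_000, size=n)

X = np.column_stack([zip_code, income])                         # only the "neutral" features
X_train, X_test, y_train, y_test = train_test_split(X, race, random_state=0)

# If this model scores well above 50%, the features are acting as a proxy for race.
proxy_check = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
print(f"Race recovered from zip code + income: {proxy_check.score(X_test, y_test):.0%} accuracy")
```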

Although representing the technology behemoth that is Salesforce, Baxter said companies have a responsibility that stretches beyond shareholders to develop technology responsibly.

“The data the machines are given are biased — we see systematic bias because AI is not neutral, it is a mirror that reflects back to all of us the bias that is already in our society and until we can remove bias from our society, we have to take active measures to remove it from the data that feeds and trains AI,” she continued.

“Let’s be frank here, the people creating AI today are a very privileged population. They often are not the subject of the COMPAS parole recommendation system, they’re often not individuals that have to live with an AI’s recommendation as to whether or not they qualify for social services or benefits.”

As a result, Baxter said, it’s difficult to prevent AI from causing wide-scale impacts that violate human rights.

“You need to do this research in advance to determine who is going to be impacted and give perspective outside of the Silicon Valley bubble, or whatever location bubble you are in,” she continued.

Offering yet another example of a government-backed initiative that resulted in citizens being treated with bias, Baxter detailed how a county that includes Pittsburgh is using AI to identify children who are at the highest risk of abuse.

“There are way more reports of abuse than they have investigators, but they’re finding people of colour tend to have their children removed more often than Caucasian families,” she said.

“One of the reasons is people of colour tend to be in the system — there’s a lot more data that’s known about them because they get more social services. The more data that a government or a private company like Facebook has about you, the more inferences it can make about you and the more it can have control over what you get access to or do not get access to.”

While COMPAS is a US project, it isn’t too far removed from what is starting to happen in Australia.

In New South Wales, police used an algorithm to create a list of people under what they called a Suspect Target Management Plan, which resulted in those on the list being targeted with extra police scrutiny.

Santow said it was revealed last year that more than half of the 1,800 people on the list were Aboriginal or Torres Strait Islander.

“Yet fewer than 3 percent of people in this state are Indigenous,” he said.

“One response would be to reject technological innovation, but we would likely fail; new technology is coming whether we like it or not … we could lose important opportunities that benefit from AI and related technology.”

The smarter alternative, he said, is to understand the challenges new technology poses for basic human rights and establish a framework that addresses those risks. He said the first thing innovators need to do is listen to all parts of the community.

“As we make and consume technology, we are simultaneously the revolution’s beneficiaries, and also the ones facing the guillotine; as we surround ourselves with the ever increasing numbers of more powerful tech gadgets, we risk sleepwalking into a world that cannot and does not protect our most basic human rights,” the commissioner continued.

“Technology should serve humanity; whether it does will depend in part on us, the choices we make, and the values we insist on.”

Late physicist Stephen Hawking famously said AI may be the best or worst thing to ever happen to humanity, and entrepreneur Elon Musk has long held the position that innovators need to be aware of the social risk AI presents to the future.

Australia’s Chief Scientist Alan Finkel, also speaking at the Human Rights & Technology Conference, shared the story of a woman he called Aunty Rosa, a Holocaust survivor. What Finkel told her about AI’s capabilities reminded her all too well of her younger years.

“For four years she lived in hiding in Lithuania, a young Jewish woman persecuted for the crime of being alive,” he explained. “As I drew my pictures of the future, she saw only the brutal truth of the past: A life lived in fear of being watched by neighbours, by shopkeepers, by bogus friends. To this day, her fear is so overwhelming that she would not consent to me using her real name.”

It’s a comparison that has been drawn with modern-day technological advancements many times, but in reigniting the conversation, Finkel said it’s important to recognise that it was data that made a crime on the scale of the Holocaust possible.

“Every conceivable dataset was turned to the services of the Nazis … Census records, medical records, even the data from scientific studies — with a lot of data, you need a sorting technology and the Nazis had access to one — punch cards,” he explained.

“Little pieces of stiff paper with perforations in the rows and the columns, marking individual characteristics like gender, age, and religion. That same punch card technology that so neatly sorted humans into categories was also used to schedule the trains to the death camps.”

That was data plus technology in the hands of ruthless people, he added.

Historically, Australia has been considered a safe place to live, Finkel said, a society where people trusted in their government and trusted in each other. But with data-driven initiatives run by the federal government placing the country’s most vulnerable in harm’s way, it’s getting harder to describe Australia that way.

At the end of 2016, the Department of Human Services (DHS) kicked off a data-matching program of work that saw the automatic issuing of debt notices to those in receipt of welfare payments through the country’s Centrelink scheme.

The program automatically compared the income people declared to the Australian Taxation Office (ATO) against the income they declared to Centrelink, and a debt notice — along with a 10 percent recovery fee — was issued when a disparity between the two was detected.

One large error in the system dubbed “robo-debt” was in how it calculated a recipient’s income: it averaged the annual figure reported to the ATO across 26 fortnights, rather than looking at what the individual was actually paid in each fortnight.
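A stripped-down, hypothetical calculation shows why that averaging matters. The figures, the threshold, and the payment rule below are all invented; the real Centrelink rules are far more involved, but the arithmetic flaw is the same: averaging smooths away the fortnights in which a person earned nothing and was legitimately entitled to support.

```python
# Hypothetical illustration of the income-averaging flaw: invented figures and
# a made-up payment rule, not the actual Centrelink calculation.

ANNUAL_ATO_INCOME = 26_000   # what the person reported to the ATO for the year
FORTNIGHTS = 26

# Assume all of that income was earned in the first 10 fortnights,
# and the person earned nothing (and reported nothing) after that.
actual_fortnightly = [2_600] * 10 + [0] * 16

THRESHOLD = 1_000   # assumed income cut-off per fortnight for a payment
PAYMENT = 500       # assumed payment when income is under the cut-off

def entitlement(fortnightly_incomes):
    """Total support due under the made-up rule: a payment in every low-income fortnight."""
    return sum(PAYMENT for income in fortnightly_incomes if income < THRESHOLD)

correct = entitlement(actual_fortnightly)                    # what was actually owed
averaged = [ANNUAL_ATO_INCOME / FORTNIGHTS] * FORTNIGHTS     # 1,000 assumed every fortnight
assumed = entitlement(averaged)                              # what the averaging system thinks was owed

print(f"Entitlement from actual fortnightly income: ${correct}")
print(f"Entitlement assumed after averaging:        ${assumed}")
print(f"Apparent 'overpayment' (phantom debt):      ${correct - assumed}")
```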

Between November 2016 and March 2017, at least 200,000 people were affected by the system.

The response from the Australian public was less than pleasant. Halting the system had been requested at length by the federal opposition, and a Senate Community Affairs References Committee reported to the government in June 2017 that it had repeatedly heard from individuals that the Online Compliance Intervention (OCI) system had caused them feelings of anxiety, fear, and humiliation, and dealing with the system had been an incredibly stressful period of their lives.

There were also reports of suicide.

But all of that aside, DHS acting deputy secretary of Integrity and Information Jason McNamara told the Finance and Public Administration References Committee in March that the data-matching program went well because it produced savings.

There are even plans to expand the OCI program of work, with the Australian Transaction Reports and Analysis Centre (Austrac) calling the DHS-led initiative a “hugely effective” exercise.

Robo-debt came up a lot during the day-long human rights conference, and the consensus was clear: Human involvement should have occurred before the letters were sent out, if they were to be sent at all.

Over recent months, the Australian government has received heat over its digital My Health Record, an initiative that automatically signs citizens up for an electronic medical record. In its initial form, the system had a number of glaring errors. For instance, records could not be completely deleted: Cancelling a record rendered it “unavailable” to healthcare providers, but it was slated to be kept for 30 years after an individual’s death, or for 130 years after their date of birth if the date of death was unknown.

As TechRepublic’s Australian Editor Chris Duckett wrote:

As the Australian government in its various guises continues to deny the prospect of an automated Centrelink dreadbot being augmented with health data, reality keeps on pricking the bubble that My Health Record proponents seem determined to keep themselves encapsulated within.

The original legislation that backed My Health Record showed that it was open to allowing the Australian Digital Health Agency — the agency charged with overseeing the initiative and ensuring citizen information is secure — to pass information on to any government agency that can make a case for increasing public revenue.

SEE: The My Health Record story no politician should miss (ZDNet)

Only after an intense backlash did Canberra back down and move to plug some of the holes in the legislation — including requiring an order from a judicial officer to gain access to data, and making delete actually mean delete.

But the government’s intransigence and human rights faux pas don’t stop there.

Secretary of the newly formed Australian Department of Home Affairs Michael Pezzullo has previously gone on the record about his agency’s approach to AI, proposing what he called a golden rule: a line in the sand not just for border security, but for every decision made in government that touches on a person’s fundamental human rights.

“No robot or artificial intelligence system should ever take away someone’s right, privilege, or entitlement in a way that cannot ultimately be linked back to an accountable human decision-maker,” he said.

Before being merged into Home Affairs, Pezzullo was the Secretary of the Department of Immigration and Border Protection (DIBP).

In February 2014, DIBP accidentally published the details of almost 10,000 asylum seekers, including their full names, dates of birth, genders, nationalities, periods of immigration detention, locations, boat arrival information, and the reasons why an entrant was classified as having travelled into Australia “unlawfully”.

SEE: Australian Home Affairs thinks its IT is safe because it has a cybermoat (ZDNet)

Pezzullo’s department — headed by Minister for Home Affairs Peter Dutton — will also be responsible for the operation of a central hub of a facial recognition system that will link up identity matching systems between government agencies in Australia.


Australian Chief Scientist Alan Finkel

(Image: Jason McCormack/Australian Human Rights Commission)

The Australian government in February introduced two Bills into the House of Representatives to enable the creation of a system to match photos against identities of citizens stored in federal and state agencies: The Identity-matching Services Bill 2018 (IMS Bill) and the Australian Passports Amendment (Identity-matching Services) Bill 2018.

The Bills will give state and territory law enforcement agencies access to the country’s new face matching services, allowing them to draw on passport, visa, citizenship, and driver licence images from other jurisdictions.

The Face Verification Service (FVS) is a one-to-one image-based verification service that will match a person’s photo against an image on one of their government records, while the Face Identification Service (FIS) is a one-to-many, image-based identification service that can match a photo of an unknown person against multiple government records to help establish their identity.
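The one-to-one versus one-to-many distinction is the crux of the two services, and a toy sketch makes it concrete. Nothing below reflects the government’s actual system: the embeddings, names, and threshold are invented, and real face matching uses trained models rather than random vectors. It only shows why identification against many records is a fundamentally broader power than verifying a single claimed identity.

```python
# Hypothetical contrast between 1:1 verification (FVS-style) and 1:N
# identification (FIS-style) using toy face embeddings. Invented data throughout.
import numpy as np

rng = np.random.default_rng(1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A made-up gallery of government record embeddings keyed by identity.
gallery = {name: rng.normal(size=128) for name in ("person_a", "person_b", "person_c")}

# A probe photo embedding: a noisy copy of person_b's record.
probe = gallery["person_b"] + rng.normal(scale=0.1, size=128)

THRESHOLD = 0.8  # assumed match threshold

def verify(probe: np.ndarray, claimed_identity: str) -> bool:
    """1:1 check: does the probe match the single record the person claims to be?"""
    return cosine_similarity(probe, gallery[claimed_identity]) >= THRESHOLD

def identify(probe: np.ndarray):
    """1:N search: compare the probe against every record and return the best match, if any."""
    best = max(gallery, key=lambda name: cosine_similarity(probe, gallery[name]))
    score = cosine_similarity(probe, gallery[best])
    return (best, score) if score >= THRESHOLD else (None, score)

print("Verification (1:1):", verify(probe, "person_b"))
print("Identification (1:N):", identify(probe))
```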

Access to the FIS will be limited to police and security agencies, or to specialist fraud prevention areas within agencies that issue passports, immigration, and citizenship documents, the government has claimed.

The FVS is now operational, providing access to passport, immigration, and citizenship images. The FIS will come online soon, with Home Affairs telling the Parliamentary Joint Committee on Intelligence and Security in May that it had purchased a facial recognition algorithm from a vendor for the FIS, while claiming immunity from disclosing which vendor it had contracted.

The Joint Committee also heard from the Human Rights Commission’s Santow in May, who said the identity-matching Bills are at “high risk” of violating Australia’s human rights obligations.

According to Santow, there are four main areas of concern: Proportionality; autonomy; lack of democratic oversight; and the risk of fraud and other unintended consequences.

“The Bills are unprecedented in their impact on Australians’ privacy,” he said. “The problem with the Bills is that some of the permitted purposes for sharing personal information are so broad that they could give law enforcement and intelligence bodies in particular almost unrestricted power to share personal data.”

Protections have not been written into the Bills, only being addressed in the explanatory memorandum, he said, which could lead to the “mass surveillance” of Australians.

Pointing to the Australia Card idea of the 1980s and calling the concept authoritarian, Brett Solomon, executive director of global human rights, public policy, and advocacy group AccessNow, said the idea of a biometric database brings the country back to the same place.

“There is very little push-back from within Australian civil society, even though the consequences are so great,” he told the conference.

“What are the accountability mechanisms for a false positive, or for a decision that’s made about you that criminalises you, even if it’s not you? How do we actually withdraw faces that don’t represent us … a whole range of questions on facial recognition, and yet the Bills are before Parliament and may very well go through with the support of the opposition, and suddenly we have a hackable, insecure database of our very identity that will, with artificial intelligence and the Internet of Things and geolocation, create the sort of things that Alan Finkel was talking about.”

SEE: Home Affairs thankful Australia’s diversity allows for improved facial recognition (ZDNet)

Solomon, whose organisation published a report on human rights in the digital era, is concerned that the Australian government will go too far with its nanny-state ideals.

“To be frank, this government is actually drunk on surveillance — there are so many laws that have been passed over recent years that it’s almost impossible to keep up, so many of the organisations that are working on these issues in Australia are voluntary organisations that are responding to this massive cybersecurity industry … plus a hyper-nervous government that is dealing with the reality of terrorism online and criminal activity online,” he told the Human Rights & Technology conference.

“I think we want to get a human rights outcome, or a better outcome for citizens — whichever way we frame it — having civil society working with champions within government, plus companies who can create a really great outcome … I’d like to encourage that kind of involvement.”

Former Australian Prime Minister Malcolm Turnbull, along with his then Attorney-General George Brandis, announced plans in July last year to introduce legislation that would force internet companies to assist law enforcement in decrypting messages sent with end-to-end encryption.

Questioning if the proposed legislation was technically possible, TechRepublic’s sister site ZDNet asked the prime minister if the laws of mathematics would trump the laws of Australia.

“The laws of Australia prevail in Australia, I can assure you of that,” Turnbull told ZDNet. “The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.”

During his media rounds, Turnbull made sure he let Australia know his intention was to protect the nation against terrorism and to protect the community from criminal rings such as those involved in paedophilia, rather than nutting out the technical specs of the laws modelled on the UK’s snoopers’ charter.

In June, then-Australian Minister for Law Enforcement and Cyber Security Angus Taylor repeated the government’s denial that it is after a backdoor, adding some curious extras.

“Now it’s sometimes argued that agencies should have privileged access to what’s known as a ‘golden key’ — a special key where you can open up, you can decrypt the data. The tech sector has pushed back hard against this, saying that’s creating so-called ‘backdoors’ or threats to the security of their devices and systems,” he said.

“In the coming weeks, we’ll begin consultation on new legislation that will modernise our telecommunications intercept and search warrant powers. [This legislation] will not create ‘backdoors’. This government is committed to no ‘backdoors’. It isn’t necessary to give law enforcement agencies access to a decryption key otherwise under the sole control of a user.

“We simply don’t need to weaken encryption in order to get what we need.”

But the kicker in Taylor’s speech was his reference to the country’s absurd practice of stopping asylum seekers from entering Australia.

“Practically speaking, ‘stopping the bots’ is every bit as important to Australians as ‘stopping the boats’,” he said.

Solomon, alongside many speakers at the Human Rights & Technology conference, said breaking encryption and introducing any kind of backdoor isn’t the right approach.

“There’s a wholesale attack on encryption in this country; encryption is at the absolute centre of open internet and is required in order for us to have rights respecting technology,” Solomon said.

Related Stories:

Australia called out as willing to undermine human rights for digital agenda (ZDNet)

A report from AccessNow has asked Australia to change course and lead the way in serving as a champion for human rights instead of against them.

Biometric Bills at ‘high risk’ of breaching human rights: Commissioner (ZDNet)

Australia’s Human Rights Commissioner has said the identity-matching Bills need clearer safeguards to remain consistent with international human rights obligations, while the Law Council of Australia has questioned whether the biometric data would be exempt from mandatory data breach reporting rules.

The Australian government and the loose definition of IT projects ‘working well’ (ZDNet)

Straight-faced, a Department of Human Services representative told a Senate committee its data-matching ‘robodebt’ project went well, because it produced savings.

Australia’s diplomatic challenge is to avoid a cyber arms race (ZDNet)

Belligerent? Paternalistic? Neo-colonial? Australia’s assertive new cyber engagement strategy could look very different through our neighbours’ eyes.

AWS facial recognition tool for police highlights controversy of AI in certain markets (TechRepublic)

Amazon’s Rekognition is being tailored to law enforcement use cases for real-time identification, prompting backlash from the ACLU.

4 tips for developing better data algorithms (TechRepublic)

Algorithm quality can affect whether your company makes the right or wrong decisions. Here are some ways to make your business smarter.

Artificial intelligence: Trends, obstacles, and potential wins (Tech Pro Research)

More and more organizations are finding ways to use artificial intelligence to power their digital transformation efforts. This ebook looks at the potential benefits and risks of AI technologies, as well as their impact on business, culture, the economy, and the employment landscape.