New York attorney general names duo to investigate Cuomo sexual harassment claims
New York’s attorney general, Letitia James, on Monday appointed a former federal prosecutor and an employment lawyer to investigate allegations that the state governor, Andrew Cuomo, sexually harassed female aides.
Joon Kim, who served as acting US attorney in Manhattan in 2017 and 2018, will join the employment lawyer, Anne Clark, in conducting the investigation, the attorney general’s office said.
James said the pair were “independent legal experts who have decades of experience conducting investigations and fighting to uphold the rule of law”.
“There is no question that they both have the knowledge and background necessary to lead this investigation and provide New Yorkers with the answers they deserve,” she said in a statement.
The appointments came as New York lawmakers were privately debating whether to join calls for Cuomo to resign from office, or urge patience while the investigation is ongoing.
A group of 21 women in the state assembly released a statement on Monday asking that James be given time to complete her assessment.
Those lawmakers, who include the No 2 Democrat in the state assembly, the majority leader, Crystal Peoples-Stokes, began working on the statement on Sunday night after the state senate’s top leader, Andrea Stewart-Cousins, called on Cuomo to resign.
“We continue to support our attorney general, the first woman, and the first African American woman to be elected to this position, as she launches this investigation,” it said.
“We request that she be allowed the appropriate time to complete her investigation rather than undermine her role and responsibility as the chief law enforcement officer of the state of New York.”
Cuomo appeared with Black clergy members on Monday at a vaccination site in New York City.
The event was closed to reporters, but Cuomo said on Sunday he had no intention of resigning and believes he can continue to govern.
Several women, including three former members of Cuomo’s staff, have accused him of making inappropriate comments about their appearance, asking questions about their sex lives and, in some cases, giving them uncomfortable hugs or unwanted kisses.
The governor has denied touching anyone inappropriately, and said some of the allegations are false.
But he has acknowledged, and apologized for, engaging in “banter” in the office that some women interpreted as flirting. Cuomo has said he didn’t realize at the time that his actions were harmful.
James, a Democrat, has said she will hire an outside law firm to investigate Cuomo’s workplace conduct.
Separately, Cuomo is under fire for withholding data from the public and from state lawmakers on Covid-19 deaths among nursing home patients. Critics say they suspect the statistics were withheld to protect the Democrat’s image – a charge the governor has denied.
The assembly speaker, Carl Heastie, whose support would be vital for any effort to impeach Cuomo, stopped short of asking him to resign on Sunday, but said: “I think it is time for the governor to seriously consider whether he can effectively meet the needs of the people of New York.”
As artificial intelligence systems go, it is pretty smart: show Clip a picture of an apple and it can recognise that it is looking at a fruit. It can even tell you which one, and sometimes go as far as differentiating between varieties.
But even the cleverest AI can be fooled with the simplest of hacks. If you write out the word “iPod” on a sticky label and paste it over the apple, Clip does something odd: it decides, with near certainty, that it is looking at a mid-00s piece of consumer electronics. In another test, pasting dollar signs over a picture of a dog caused it to be recognised as a piggy bank.
OpenAI, the machine learning research organisation that created Clip, calls this weakness a “typographic attack”. “We believe attacks such as those described above are far from simply an academic concern,” the organisation said in a paper published this week. “By exploiting the model’s ability to read text robustly, we find that even photographs of handwritten text can often fool the model. This attack works in the wild … but it requires no more technology than pen and paper.”
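The attack works because Clip classifies an image by embedding it in the same vector space as candidate text labels and picking the label whose embedding is closest; text painted onto the image can drag its embedding toward the wrong concept. The following is a minimal illustrative sketch of that scoring step, using made-up toy vectors rather than real Clip embeddings:

```python
import numpy as np

# Toy sketch of CLIP-style zero-shot classification. The embeddings below
# are invented for illustration; they are not produced by the real Clip model.

def cosine_similarity(a, b):
    """Similarity score used to compare image and text embeddings."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical text-label embeddings in a shared image-text space.
label_embeddings = {
    "an apple": np.array([0.9, 0.1, 0.0]),
    "an iPod":  np.array([0.0, 0.2, 0.95]),
}

def classify(image_embedding):
    """Pick the label whose embedding is closest to the image embedding."""
    return max(label_embeddings,
               key=lambda lbl: cosine_similarity(image_embedding,
                                                 label_embeddings[lbl]))

# A plain photo of an apple embeds near the "apple" concept...
apple_photo = np.array([0.85, 0.15, 0.05])
# ...but a sticky note reading "iPod" pulls the embedding toward the text
# concept, which is the essence of a typographic attack.
labelled_apple = np.array([0.3, 0.2, 0.9])

print(classify(apple_photo))    # -> an apple
print(classify(labelled_apple)) # -> an iPod
```

The sketch only shows why nearest-label scoring is vulnerable: any feature of the image, including readable text, shifts the single embedding that the label comparison is based on.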
Like GPT-3, the last AI system made by the lab to hit the front pages, Clip is more a proof of concept than a commercial product. But both have made huge advances in what was thought possible in their domains: GPT-3 famously wrote a Guardian comment piece last year, while Clip has shown an ability to recognise the real world better than almost all similar approaches.
While the lab’s latest discovery raises the prospect of fooling AI systems with nothing more complex than a T-shirt, OpenAI says the weakness is a reflection of some underlying strengths of its image recognition system. Unlike older AIs, Clip is capable of thinking about objects not just on a visual level, but also in a more “conceptual” way. That means, for instance, that it can understand that a photo of Spider-man, a stylised drawing of the superhero, or even the word “spider” all refer to the same basic thing – but also that it can sometimes fail to recognise the important differences between those categories.
“We discover that the highest layers of Clip organise images as a loose semantic collection of ideas,” OpenAI says, “providing a simple explanation for both the model’s versatility and the representation’s compactness”. In other words, just as human brains are thought to work, the AI thinks about the world in terms of ideas and concepts, rather than purely visual structures.
But that shorthand can also lead to problems, of which “typographic attacks” are just the most obvious example. The “Spider-man neuron” in the neural network can be shown to respond to the collection of ideas relating to Spider-man and spiders, for instance; but other parts of the network group together concepts that may be better separated out.
“We have observed, for example, a ‘Middle East’ neuron with an association with terrorism,” OpenAI writes, “and an ‘immigration’ neuron that responds to Latin America. We have even found a neuron that fires for both dark-skinned people and gorillas, mirroring earlier photo tagging incidents in other models we consider unacceptable.”
As far back as 2015, Google had to apologise for automatically tagging images of black people as “gorillas”. In 2018, it emerged the search engine had never actually solved the underlying issues with its AI that had led to that error: instead, it had simply manually intervened to prevent it ever tagging anything as a gorilla, no matter how accurate, or not, the tag was.