
San Francisco's Killer Police Robots Threaten the City's Most Vulnerable

One effect of AB 481 is to add local oversight to hardware like the kind obtained through a US Department of Defense program that sends billions of dollars of military equipment, such as armored vehicles and ammunition, to local police departments. Gear from the program was used against protesters in the wake of the police killings of Michael Brown in Ferguson, Missouri, in 2014 and George Floyd in Minneapolis in 2020.

Earlier this year, San Francisco supervisor Aaron Peskin amended San Francisco's draft policy for military-grade police equipment to explicitly forbid the use of robots to deploy force against any person. But an amendment proposed by SFPD this month argued that police needed to be free to use robotic force because its officers must be ready to respond to incidents in which multiple people are killed. "In some cases, deadly force against a threat is the only option to mitigate those mass casualties," the amendment said.

Ahead of yesterday's vote, Brian Cox, director of the Integrity Unit at the San Francisco Public Defender's Office, called the change antithetical to the progressive values the city has long stood for and urged supervisors to reject SFPD's proposal. "It is a false choice, predicated on fearmongering and a desire to write their own rules," he said in a letter to the board of supervisors.

Cox said lethal robots on SF streets could cause great harm, worsened by "SFPD's long history of using excessive force, particularly against people of color." The American Civil Liberties Union, the Electronic Frontier Foundation, and the Lawyers' Committee for Civil Rights have also voiced opposition to the policy.

The San Francisco Police Department has disclosed that it has 17 robots, though only 12 are operational. They include search-and-rescue robots designed for use after a natural disaster like an earthquake, but also models that can be equipped with a shotgun, explosives, or a pepper-spray emitter.

Supervisor Aaron Peskin referred to the potential for police use of explosives to go wrong during the debate ahead of yesterday's vote. During a 1985 standoff in Philadelphia, police dropped explosives from a helicopter onto a house, causing a fire that killed 11 people and destroyed 61 homes.

Peskin called that one of the most atrocious and illegal incidents in the history of US law enforcement but said the fact that nothing comparable has ever happened in San Francisco gave him a measure of comfort. He ultimately voted to allow SFPD to use deadly robots. But he added the restriction that only the chief of police, assistant chief of operations, or deputy chief of special operations can authorize use of deadly force with a robot, along with language that urges consideration of de-escalation.

Granting approval to killer robots is the latest twist in a series of laws on policing technology from the tech hub that is San Francisco. After passing a law rejecting police use of Tasers in 2018, and providing oversight of surveillance technology and barring use of face recognition in 2019, city leaders in September gave police access to private security camera footage.

Supervisor Dean Preston referred to San Francisco's inconsistent record on police technology in his dissent yesterday. "If police shouldn't be trusted with Tasers, they sure as hell shouldn't be trusted with killer robots," he said. "We have a police force, not a military."

San Francisco's new policy comes at a time when police access to robots is expanding and those robots are becoming more capable. Most current police robots move slowly on caterpillar tracks, but police forces in New York and Germany are beginning to use legged robots like the nimble quadruped Spot Mini.

Axon, maker of the Taser, has proposed adding the weapon to drones to stop mass shootings. And in China, researchers are working on quadrupeds that operate in tandem with tiny drones to chase down suspects.

Boston Dynamics, a pioneer of legged robots, and five other robotics manufacturers published an open letter in October objecting to the weaponization of their robots. Signatories said they felt a renewed sense of urgency to state their position due to "a small number of people who have visibly publicized their makeshift efforts to weaponize commercially available robots." But as robotics becomes more advanced and cheaper, there are plenty of competitors without such reservations. Ghost Robotics, a Pennsylvania company in pilot projects with the US military and the Department of Homeland Security on the US-Mexico border, allows customers to mount weapons on its legged robots.

Hinge Will Try to Thwart Scammers With Video Verification

Match Group, which operates one of the world's largest portfolios of dating apps, will soon add a new profile verification feature to its popular dating app Hinge. The feature is part of a larger effort to crack down on scammers who use fake photos and purport to be people they're not on the app, often with the intent of eventually scheming romantic conquests out of money.

Jarryd Boyd, director of brand communications for Hinge, said in a written statement that Hinge will begin rolling out the feature, named Selfie Verification, next month. Hinge will ask users to take a video selfie within the app in order to confirm they're a real person and not a digital fake. Match Group then plans to use a combination of machine-learning technology and human moderators to "compare facial geometries from the video selfie to photos on the user's profile," Boyd said. Once the video is confirmed as authentic, a user gets a "Verified" badge on their Hinge profile.
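The comparison Boyd describes is a standard face-verification pattern: reduce each face image to an embedding vector and accept the match only if the selfie's vector is close enough to a profile photo's vector. A minimal sketch of that final comparison step is below; the toy vectors, the `selfie_matches_profile` function, and the 0.8 threshold are illustrative assumptions, not Match Group's actual system, and in practice the embeddings would come from a trained face-embedding model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def selfie_matches_profile(selfie_embedding, profile_embeddings, threshold=0.8):
    """Accept verification if the video-selfie embedding is close enough
    to at least one profile-photo embedding. (Threshold is a made-up
    value; borderline cases could be routed to human moderators.)"""
    return any(
        cosine_similarity(selfie_embedding, emb) >= threshold
        for emb in profile_embeddings
    )

# Toy vectors standing in for real face embeddings.
selfie = [0.9, 0.1, 0.4]
same_person_photo = [0.85, 0.15, 0.38]
different_person_photo = [0.1, 0.95, 0.2]

print(selfie_matches_profile(selfie, [same_person_photo]))       # True
print(selfie_matches_profile(selfie, [different_person_photo]))  # False
```

Thresholding an embedding distance is also why the company pairs the model with human moderators: near-threshold scores are exactly where automated verification is least reliable.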

The move comes after a recent WIRED story highlighting the proliferation of fake accounts on the Hinge dating app. These fake profiles are often peppered with glossy photos of attractive people, though there's something off-putting about their perfection. The person has often "just joined" the dating app. Their descriptions of themselves or responses to prompts are nonsensical, a sign that a person may be using a translation app to try to connect with someone in their native language. And in many instances, the person on the other end of the fraudulent profile will urge their match to move the conversation off of the app, a tactic that allows them to maintain a dialogue even if the fraudster is booted off of Hinge.

By December, Selfie Verification should be available to all Hinge users worldwide, including people in the US, UK, Canada, India, Australia, Germany, France, and more than a dozen other countries.

"As romance scammers find new ways to defraud people, we are committed to investing in new updates and technologies that prevent harm to our daters," Boyd said.

Hinge is one of many dating apps owned by Match Group, and it's not the first to use a face recognition tool to try to spot fakes. Prior to this, Tinder and Plenty of Fish had photo verification tools. In August, a spokeswoman from Match Group told WIRED that photo verification would be coming to Hinge, OKCupid, and Match.com "in the coming months."

Match Group says Hinge users will have the option to verify their profiles with a video selfie when the feature launches, and that it will not be a requirement.

The company has also emphasized that it has a Trust & Safety team of more than 450 employees who work across the company's many dating apps, and that last year Match Group invested more than $125 million to build new technology "to help make dating safe." Four years ago, it created an advisory council to come up with policies to prevent harassment, sexual assault, and sex trafficking.

It is Actually Me

The company's rollout of video verification tools on Hinge is long overdue, and it may not be foolproof. Maggie Oates, an independent privacy and security researcher who has also programmed a game about sex work and privacy called OnlyBans, says in an email that she strongly believes biometric authentication should be optional and incentivized in dating apps, but not required. A multi-pronged verification approach might be more effective, Oates says, with the added benefit of giving users options. "Not everyone is comfortable with biometrics. Not everyone has a driver's license. Online identity verification is a really hard problem."

And she believes that relying solely on facial recognition technology for profile verification will only work for so long.