
  • Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said


    Tech behemoth OpenAI has touted its AI-powered transcription tool Whisper as having “human-level robustness and accuracy.”

    But Whisper has one major flaw: It’s prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the made-up text – known in the industry as hallucinations – can include racial slurs, violent rhetoric and even imagined medical treatments.

    Experts said such fabrications are problematic because Whisper is being used in a host of industries around the world to translate and transcribe interviews, generate text in popular consumer technologies and create captions for videos.


    More troubling, they said, is a rush by medical centers to use Whisper-based tools to transcribe patient consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk areas.”

    The full extent of the problem is hard to discern, but researchers and engineers said they often encounter Whisper’s hallucinations in their work. A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in 8 out of every 10 audio transcriptions he inspected before he began trying to improve the model.

    A machine learning engineer said he initially detected hallucinations in about half of the 100-plus hours of Whisper transcripts he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper.

    The problems persist even on short, well-recorded audio samples. A recent study by computer scientists found 187 hallucinations in more than 13,000 clear audio fragments they examined.


    This trend would lead to tens of thousands of erroneous transcriptions over millions of records, the researchers said.

    Such mistakes can have “really serious consequences,” especially in hospital settings, said Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year.

    “No one wants a misdiagnosis,” said Nelson, a professor at the Institute for Advanced Study in Princeton, New Jersey. “There has to be a higher bar.”


    Whisper is also used to create closed captions for the deaf and hard of hearing – a population at particular risk for erroneous transcriptions.

    That’s because deaf and hard-of-hearing people have no way of identifying fabrications that are “hidden among all this other text,” said Christian Vogler, who is deaf and directs Gallaudet University’s Technology Access Program.

    OpenAI urged to address the problem

    The proliferation of such hallucinations has led experts, advocates and former OpenAI employees to call for the federal government to consider AI regulations. At the very least, they said, OpenAI should address the flaw.

    “This seems solvable if the company is willing to prioritize it,” said William Saunders, a San Francisco-based research engineer who left OpenAI in February over concerns with the company’s direction. “It’s problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems.”

    An OpenAI spokesperson said the company is constantly studying how to reduce hallucinations and praised the researchers’ findings, adding that OpenAI incorporates feedback into model updates.

    While most developers assume transcription tools misspell words or make other mistakes, engineers and researchers said they’ve never seen another AI-powered transcription tool hallucinate as much as Whisper.

    Whisper hallucinations

    The tool is integrated into several versions of OpenAI’s flagship chatbot, ChatGPT, and is a built-in offering on Oracle and Microsoft’s cloud computing platforms, which serve thousands of companies worldwide. It is also used to transcribe and translate audio into text in many languages.


    In the last month alone, a recent version of Whisper was downloaded over 4.2 million times from the open-source artificial intelligence platform HuggingFace. Sanchit Gandhi, a machine learning engineer there, said Whisper is the most popular open-source speech recognition model and is integrated into everything from call centers to voice assistants.

    Professors Allison Koenecke of Cornell University and Mona Sloane of the University of Virginia examined thousands of short excerpts they obtained from TalkBank, a research repository hosted at Carnegie Mellon University. They determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.

    In one example they discovered, a speaker said: “He, the boy, was going to take the umbrella, I’m not sure exactly.”


    But the transcription software added: “He took a large part of a cross, a small, small part … I’m sure he didn’t have a terror knife, so he killed a number of people.”

    A speaker on another recording described “two more girls and a lady”. Whisper made up additional comments on race, adding “two other girls and a lady, um, who were black.”

    In a third transcript, Whisper invented a non-existent drug called “hyperactivated antibiotics”.

    Researchers aren’t sure why Whisper and similar tools hallucinate, but software developers said the hallucinations tend to occur amid pauses, background sounds or music.

    OpenAI has recommended in its online disclosures against using Whisper in “decision-making contexts, where flaws in accuracy can lead to pronounced flaws in results.”

    Transcribing doctor’s appointments

    That warning hasn’t stopped hospitals or medical centers from using speech-to-text models, including Whisper, to transcribe what’s said during doctor visits, freeing medical providers from spending as much time on notes and written reports.

    Over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital in Los Angeles, have begun using a Whisper-based tool built by Nabla, which has offices in France and the US.

    That tool was tuned on medical language to transcribe and summarize patient interactions, said Nabla’s chief technology officer, Martin Raison.

    Company officials said they are aware that Whisper can hallucinate and are mitigating the problem.

    It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool deletes the original audio for “data security reasons,” Raison said.

    Nabla said the tool has been used to transcribe about 7 million medical visits.

    Saunders, the former OpenAI engineer, said the deletion of the original audio can be worrisome if transcripts aren’t double-checked or clinicians can’t access the recording to verify they’re accurate.

    “You can’t catch errors if you take away the ground truth,” he said.

    Nabla said no model is perfect and that their model currently requires medical providers to quickly edit and approve transcribed notes, but that could change.

    Privacy concerns

    Because patients’ appointments with their doctors are confidential, it’s hard to know how the AI-generated transcripts are affecting them.

    Koenecke is also the author of a recent study that found hallucinations in a speech-to-text transcription tool. AP

    A California state lawmaker, Rebecca Bauer-Kahan, said she took one of her children to the doctor earlier this year and refused to sign a form from the health network asking for her permission to share the consultation audio with vendors that included Microsoft Azure, the cloud computing system run by OpenAI’s largest investor. Bauer-Kahan didn’t want such intimate medical conversations shared with tech companies, she said.

    “The release was very specific that for-profit companies would be eligible to have this,” said Bauer-Kahan, a Democrat who represents a swath of San Francisco suburbs in the state Assembly. “I was like ‘absolutely not.’”

    John Muir Health spokesman Ben Drew said the health system complies with state and federal privacy laws.


  • Realistic AI photos reveal what typical cheaters look like – is that you?


    Bald-faced liars are also more likely to be adulterers.

    Are you bald with a big nose in your 40s? You’re more likely to cheat on your partner, according to an AI-generated profile of what a typical philanderer looks like.

    “We shed light on the physical traits associated with those who are prone to cheating,” said Rosie Maskel, a senior marketing executive at online casino MrQ, which conducted the scandalous study, Kennedy News reported.


    What the typical male cheater looks like, according to an AI-generated description based on a study by online casino MrQ. Kennedy News and Media

    The digital betting site reportedly surveyed 2,000 Britons – many of whom had been cheated on in the boudoir – to find out what attributes cheaters had in common.

    They then fed the results into an AI-powered image generator to create a “photo-fit” depiction of the average cheater.

    The AI-generated rendering showed a man in his 40s with blue-gray eyes, sparse or no hair, and frown lines. Throw in small lips and a bigger schnoz, and you’ve got the poster child for someone who cheats on their spouse, according to the study.

    Their female counterpart, according to the illustration, was in her early fifties and had dark hair with a small nose and medium-sized nostrils.

    Both the male and female cheaters were described as having a thin build and “staring eyes.”

    “Our research showed that just under half (41%) [of people] are familiar with this painful betrayal, so it may be that many identify with the characteristics in these images,” said Maskel.


    According to the study, the typical female cheater is a dark-haired woman in her 50s. “We’ve shed light on the physical traits associated with those who are prone to cheating,” said Rosie Maskel, a senior marketing executive at online casino MrQ, which conducted the scandalous study. Kennedy News and Media

    However, perhaps the profile of the male cheater was more accurate given the larger sample size.

    The study found that men are far more likely to cheat than women, with 35% of men admitting to having cheated at least once compared to just 24% of women.

    Women are more likely to stay with their adulterous partners after catching them straying. More than a fifth (22%) stayed with their unfaithful partner for at least two more years, while only around one in ten (13%) of scorned men did the same.

    Meanwhile, a whopping 2% of people married their boss after discovering they were sleeping around.

    As the description of the typical unfaithful man attests, the drama of adultery does not subside in middle age.

    Those aged 45-54 are among the most likely to have cheated at least once. More than one in three (35%) admitted to cheating on their partners, while 54% of 45-54 year olds revealed that they had either been the victim or perpetrator of infidelity.

    Coincidentally, the study found that cheating generally peaks around the fall, so lovebirds should stay alert now — especially if their other half possesses the aforementioned physical characteristics.

    Of course, the odds of scoring an extramarital booty are more than skin deep, according to the marketing executive.

    “Obviously, it’s important to note that these are based on statistical analysis and will not apply to all individuals,” Maskel said. “People’s behavior is determined by their decisions and actions, not by how they look.”

    Professional “honey trapper” Madeline Smith recently revealed some tell-tale behaviors that could indicate a man is having sex on the side, including hiding his phone, neglecting to include photos of his significant other on social media and using Snapchat.


  • $3 ‘coffee filter’ device can detect colorectal cancer in less than an hour


    A remarkable new device promises to make cancer detection cheaper, faster and more accessible than ever before.

    As described in an upcoming issue of Lab on a Chip, researchers at the University of Texas at El Paso (UTEP) say they have developed a system that can detect cancer markers in the blood with greater sensitivity than current diagnostic methods.

    Known as a paper-in-polymer-pond (PiPP) device, the new test platform combines paper similar to that found in coffee filters with a plastic frame.

    The new test promises to provide accurate results using a single drop of blood. bunyarit – stock.adobe.com

    Using a drop of blood from a patient, PiPP targets two cancer markers: carcinoembryonic antigen (CEA), which is associated with colorectal cancer, and prostate-specific antigen (PSA), which indicates prostate cancer.

    CEA and PSA appear in the blood at only very low concentrations in the early stages of cancer, making them historically difficult to detect. The new device, however, can pick up these markers at those low concentrations, making it roughly 10 times more sensitive than test kits currently on the market.

    “Our new biochip device is low-cost—just a few dollars—and sensitive, which will make accurate disease diagnosis accessible to anyone, rich or poor,” lead author XiuJun (James) Li, a professor of chemistry and biochemistry at UTEP, said in a statement.

    “It’s portable, fast and eliminates the need for specialized instruments,” added Li.

    This promising development comes on the heels of a surprising new study that suggests Gen X and millennial Americans are at higher risk of developing 17 cancers compared to older generations.

    Known as a paper-in-polymer-pond (PiPP) device, the new test platform combines paper similar to that found in coffee filters with a plastic frame. Lab on a Chip

    Colorectal cancer, which the PiPP device may be able to detect in its early stages, has steadily increased in adults under 50 since the 1990s. Young people who develop colon cancer tend to be diagnosed at later stages of the devastating disease — and have more aggressive types of tumors — but PiPP’s promise of earlier detection could equate to lifesaving intervention.

    Prostate cancer is similarly driving the current cancer epidemic, with 10% of new diagnoses in the US occurring in men under 55.

    Prostate cancer deaths are expected to increase by 136% from 2022 to 2050 worldwide. As with all cancers, early detection is key – and the new PiPP device could prove to be a game changer.

    And the device delivers not only early diagnoses but also quick results. Compared with the 16 hours that traditional testing takes, PiPP returns results in just one hour, and those results can be read with a smartphone.

    Colorectal cancer, which the PiPP device may be able to detect in its early stages, has steadily increased in adults under 50. AC – stock.adobe.com

    Researchers note that developing countries often lack access to cancer screening methods and resources such as laboratory equipment and providers, a limitation that makes early detection difficult and mortality rates higher.

    However, the PiPP device – which is affordable, reusable and user-friendly – helps level the playing field for early diagnosis.

    Robert Kirken, dean of the College of Science at UTEP, said the innovation “significantly improves point-of-care diagnostics by reducing detection time and the need for costly instrumentation.”

    “This makes it ideal for resource-limited settings, which will improve early diagnosis and lead to better cancer outcomes. I look forward to seeing what this innovation leads to,” he added.

    While the potential is promising, it may be several years before the PiPP device is commercially available.

    The prototype will be tested for efficacy and safety through clinical trials and will eventually seek approval from the US Food and Drug Administration before being made available to healthcare providers.
