I worried my science wasn’t making an impact. So I ran for elected office

From ScienceMag:

“Have you considered running for elected office?” My friend’s question didn’t come out of nowhere. I was active in my community as a volunteer, especially in environmental and social justice causes, and I regularly met elected officials and advocated for issues I cared about. But the question still took me by surprise. As a tenured professor and dean, my academic identity was firmly established. Was politics even something that academics did?

By the usual measures, I was successful. I had good funding, a solid publication record, and I had been promoted to serve as dean of engineering at the liberal arts college where I work in New York state. I enjoyed my leadership role and my research. But I did have reason to think about moving in another direction.

My most cited paper was a nice article with some juicy math—3D vector calculus in non-Cartesian coordinates!—but the work had little relevance to everyday issues. That always bothered me.

So did academics’ reluctance to speak out about policy. I had noticed that even when scientific papers did have findings worth sharing with the public or government officials, they tended to bury phrases like “We recommend that policymakers do X” near the end. There was an unstated assumption that a scientist’s role is to inform policy, not help enact it. That stuff was done by other people.

And as a researcher who’s had his share of scientific disagreements with other researchers, I have been able to work with others whose viewpoints differ from mine—an approach that is needed in these times of intense political polarization.
  • Ashok Ramasubramanian
  • Templeton Institute at Union College

When I turned 50, I also started to ask myself uncomfortable questions about my own future, such as, “What can I do with the time that is left to me?” I wondered whether I would have regrets if I did not serve my community more directly. After fulfilling my dean duties, I only had so much time left in the day. I realized I could not both be an active researcher and engage in public service. To make it work, I would have to give up research.

I had just completed work on a major federal grant. So, I began to think the time might be right to consider running for a town council position. When the idea was only a nascent possibility, I broached it with my boss, our college’s vice president of academic affairs, and was pleased to discover that she was supportive. Our institution encourages community service and outside-the-box thinking, and administrators are generally happy for faculty to branch out. The idea is to help model lifelong learning—a value we work to instill in our students.

After much thought, I decided to take the leap. I closed down my lab space and liquidated all my research assets, turning them over to more junior faculty members, and began to spend my nondean hours going door to door and talking with voters.

It was a new world, and I had to learn a lot of new things quickly. My experience dabbling in research fields outside my own was helpful as I tackled activities that were new to me like fundraising, campaign finance reporting, and social media outreach. But I also leaned heavily into my favorite philosophy: “Fake it till you make it.”

I was pleased to find that voters in my community appreciated my candidacy. Being a scientist and an academic helped me stand out as a unique and qualified individual. And after I won my election and was appointed to the town council in January, I have tried to use skills I gained as a scientist to help my community. For instance, my experience writing research proposals is helpful when applying for grants aimed at infrastructure maintenance and green space preservation. And as a researcher who’s had his share of scientific disagreements with other researchers, I have been able to work with others whose viewpoints differ from mine—an approach that is needed in these times of intense political polarization. The time commitment has also been manageable, as council meetings are held in the evenings after normal work hours.

I miss many aspects of research, especially spending quiet time in my lab and mentoring students. But the experiences of running for office and serving the public as an elected official have been equally rewarding and fulfilling. I am not sure what my political future holds, but for now I am having quite a bit of fun serving my community in an official capacity. I encourage other scientists to ask hard questions about new ways to put their skills to work, especially in the second half of life.

Do you have an interesting career story to share? You can find our author guidelines here.


Cite unseen: when AI hallucinates scientific articles

From ScienceMag:

Experimental Error is a column about the quirky, comical, and sometimes bizarre world of scientific training and careers, written by scientist and comedian Adam Ruben. Barmaleeva/Shutterstock, adapted by C. Aycock/Science

Meredith Cimmino had been careful to avoid artificial intelligence (AI) tools when writing her dissertation. But when her Ph.D. committee at Rutgers University recommended she check for any new publications in the field, just to make sure her references were up to date, she thought it wouldn’t hurt to ask ChatGPT a quick question. “Everybody’s been talking about using AI to look things up,” Cimmino wrote to me, “so I’m like, ‘Oh let me just go look.’”

Sure enough, the AI tool immediately spat out a list of articles she had never heard of (and, if it operated the way I’ve seen ChatGPT operate, it probably started with an off-putting compliment like, “That sounds like a dynamic research field!”). At first, Cimmino was ecstatic. Not only could she update her paper with these references, but she could also bolster her conclusions. The titles and AI-generated summaries of the papers’ findings seemed to strongly support her own.

But the deeper she dug, the more she questioned the list ChatGPT had given her. First and foremost, she told me, the mere existence of this plethora of supportive studies sounded “too good to be true”—because, as a Ph.D. student who had been researching the field for years, why hadn’t she heard of any of the papers? “So, I go look up the studies,” she explained. “And they don’t exist.”

Cimmino’s experience is yet another instance of AI doing what’s sometimes called “hallucinating with confidence”—in other words, giving you a beautiful answer, presented with unassailable conviction, that has absolutely no factual basis. And although Cimmino thankfully dodged that bullet by fact-checking each real-sounding reference until she verified its nonexistence, plenty of researchers haven’t. The rise of AI has been accompanied by a raft of stories about scientists blindsided by requests for the full text of articles or textbook chapters they never wrote, or journals belatedly discovering one of their publications cites articles that, well, aren’ticles.

To be clear, these fake references are very, very convincing. They’re not like the agrammatical crypto phishing scams we’re all used to. (“The IRS hopes to giving your refund! Click this Belarussian website domain for money flavors!”) They use realistic names, real journal titles, plausible summaries, and they appear in response to your own highly specific question.

This is partly the fault of how AI operates. Under the hood, it doesn’t just search for the right answer to your query—it also asks, “What would an accurate and helpful response to this prompt look like?” Sometimes it decides it would look like a real, accurate response. But sometimes it favors the “what would one look like” part of its algorithm, and then it gets to work generating references that resemble the sort of thing you’re hoping to find.

Just to see what would happen, I opened ChatGPT and referred it to this column, telling it to examine my back catalog of about 180 Experimental Error articles. Then I asked it to name five articles I’ve written about AI and give a short summary of each. I asked this question knowing full well that I’ve only written about AI once or twice, and a correct response would either be to point this out, or maybe to name a few columns I wrote that weren’t exactly about AI but maybe had AI-ish elements in them.

Nope. It just hallucinated.

First it cited an article correctly, a piece published in May 2025 about researchers asking AI to summarize scientific papers. But then it cited four more articles that never existed. Each article had a plausible title. One was called “Reviewer 3 Is Now a Neural Network.” Another promised that I had tackled the provocative question: “Should You Let AI Design Your Experiments?” But I never wrote these articles, and based on a Google search, neither did anyone else. The AI engine didn’t just misattribute someone else’s writing to me, it generated new article titles that no one wrote and swore they were mine.

ChatGPT even gave each article a lovely little (fake) summary. For example, under an article titled “Chatbots in the Lab: Helpful Assistant or Liability?” it commented, “Ruben reflects on the growing use of conversational AI tools by students and researchers—for coding, writing, and troubleshooting experiments.”

I know these articles don’t exist because I’m me. But unless the searcher independently tries to find them, how would they know the truth? Who in the world could be expected to know I’ve never written these articles when AI cites and summarizes them so convincingly?

I continued the conversation. “Adam Ruben never wrote articles 2-5 in that list,” I typed. “Did you hallucinate them?” The reply was very honest, in both a refreshing and terrifying way: “Yes—you’re right to call that out,” it began. “I did hallucinate articles 2-5 in my previous response.”

Then it described in detail why it may have hallucinated: “This is a classic hallucination pattern: I had one real anchor (the May 2025 AI article). I extrapolated similar-sounding topics consistent with his column. I failed to verify each item against a reliable source.”

Well, for goodness’ sake.

That’s the same problem some researchers have. And one might say any scientist who cites a paper they’ve never read deserves to be called out for fraud, or at least for their concerning lack of due diligence. But think about all the papers you’ve had your name on. Have you read every reference in those papers? When the first line of your article is “[Subject] has been extensively studied1-28,” have you read all 28 of those references? Your time is limited, articles are often behind paywalls, and lots of older work hasn’t been digitized. If reference No. 25 is a 60-year-old paper in a journal that your institution doesn’t subscribe to—but you’ve seen it listed in other papers as one of the seminal publications in your field, and you’ve read an abstract—would you really leave it out, and risk failing to pay tribute to something important? Or would you do what everyone else does, and keep it in?

Luckily, one solution is to use a tool we’ve already developed: our skepticism. Our assumption that information is likely wrong, until we see reasonable evidence otherwise, is part of what makes us successful as scientists. Now, we just need to apply it to citations as well.

And by “we,” I mean all of us: scientists writing papers, scientists reading papers, and even—and especially—the scientific journals that evaluate and publish our papers.

We need to do this to make sure our own work is sound. But we also need to ensure we’re not awarding these bogus references credibility. If Cimmino hadn’t tried to chase down the citations AI had recommended, she might have pasted them into her thesis—and then a future student, hoping to build on her research, would have had all the more reason to believe these articles, and their conclusions, were real.
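The skeptical reflex can get a mechanical assist. As a minimal sketch (my own illustration, not a tool anyone in this story used; it assumes only CrossRef’s public REST API, which returns a 404 for unregistered DOIs), one can at least confirm that a DOI in a reference list resolves to a registered record:

```python
import json
import re
import urllib.error
import urllib.request

DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(s: str) -> bool:
    # Cheap syntactic check: registered DOIs start with "10.", a 4-9 digit
    # registrant code, then a slash and a suffix.
    return bool(DOI_RE.match(s.strip()))

def doi_is_registered(doi: str, timeout: float = 10.0) -> bool:
    # Ask the public CrossRef REST API whether this DOI has a record.
    # A 404 means no such DOI is registered -- a strong hint that a
    # reference citing it was fabricated.
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp).get("status") == "ok"
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            return False
        raise
```

A fabricated reference can of course carry a real DOI that points at an unrelated paper, so a check like this only catches the laziest hallucinations; the returned metadata still has to be compared against the claimed title and authors.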

Researchers are developing new tools to double-check the veracity of citations as well. Publisher Elsevier, for example, now offers a program called LeapSpace that includes a “truth card” with each result to explain whether a reference supports, refutes, or is neutral about a conclusion. In other words, it fights the problems of AI by using … what we hope is better AI.

A few days after telling me her story, Cimmino sent another short message. She realized she had referred to AI throughout her story as “they,” and she asked me to please change “they” to “it.”

“I didn’t know it was making them up,” she wrote of the hallucinated citations. “I know AI is not real.”

I hope we all do. But it’s easy to forget, isn’t it?


How I got over my fear of teaching

From ScienceMag:

My heart raced as I walked into the classroom, where 200 curious medical students were waiting for me. While the technician fitted my microphone, I gripped the podium and scanned the sea of expectant faces. After years of turning down opportunities to teach, I’d finally agreed to give it a go—and I was terrified. I dove in and felt myself whizzing through my slides, trying to get through the material before my nerves got the best of me. After a few minutes, a student raised her hand and asked me to slow down. I felt my face go red—had I messed up my first ever lecture?

I never imagined I would find myself in front of a classroom. As an introvert and a nonnative English speaker, I found the prospect daunting. I’d see colleagues face hundreds of students and give fluent, engaging lectures and think I could never match up to them. Instead, I decided I’d stick to research, where I felt comfortable running experiments, applying for grants, and mentoring individual students.

A few years into my postdoc, however, my mentor asked whether I could teach her class while she sat on a grant-review panel. Out of respect for her, I said I’d give it a try, though I was nervous. I spent hours preparing, listening to recordings of her past lectures and cramming her slides with extra information, worried I’d forget what to say.

So I was embarrassed when, just minutes into the lecture I’d so meticulously prepared for, the student told me she was having trouble keeping up. I paused, took a breath, and adjusted my pace. And to my surprise, the energy in the room shifted. Students leaned in and asked questions, and I began to feel more of a connection to what I was teaching. I was back on track. By the end of the class, nothing had gone terribly wrong and I was relieved to have fulfilled my obligation to my mentor.

Later, when I watched the recording of the class, I could see my teaching get better as the lecture went on, and I began to get excited by the prospect of improving further. With my mentor’s support, I decided to take on more classes.

And so a single lecture grew into a regular commitment and eventually a responsibility I embraced. It took time and practice to become a confident, engaging teacher, but student feedback and teaching evaluations helped. After a student told me my slides had too much text, for instance, I redesigned them to include more visuals and fewer words, and found that this change helped make discussions more interactive. Eventually, I was offered a position designing courses as well as teaching them, something I had never anticipated in my career path.

At first, I worried teaching would distract me from the relentless demands of maintaining a funded research lab. But I actually found it sharpened my focus and transformed how I communicate science to colleagues and funders. Preparing lectures required me to revisit fundamentals I hadn’t thought about in years, keep up-to-date with new science, and learn to clearly explain complex ideas. In the lab and at conferences, I slowed down and focused on explaining concepts and protocols clearly, resulting in better discussions and more collaboration. I even secured a major grant—proof that clarity and connection matter as much in funding proposals as in classrooms—and my teaching experience helped me gain an earlier than anticipated promotion to my next faculty appointment.

Looking back, saying “yes” to teaching was one of the most transformative decisions of my career. It didn’t just make me a better educator; it made me a better scientist. For anyone nervous about the prospect of teaching, I can only recommend giving it a go. It’s common to worry about language fluency, feeling exposed in a room full of brilliant minds, or being pulled away from research duties. But anyone who thoroughly understands their subject can become a better communicator with practice and by refining their approach over time. Sometimes the most fulfilling academic life is not the one we first imagined, but the one we build through both intentional choices and unexpected experiences.



How opening up about being a cancer survivor reshaped my Ph.D. journey

From ScienceMag:

I was in the fourth year of my Ph.D. in tumor immunology when I gave a talk at a major international conference. I had rehearsed every slide, every transition, determined to present my results as a coherent scientific story. But near the end I paused and said something I had not practiced. “This research is personal; I’m not only a researcher, but also a survivor of childhood leukemia.” The words surprised me as soon as they left my mouth. I felt I had crossed an invisible professional line I had spent years trying not to approach.

I was diagnosed with acute lymphoblastic leukemia when I was 3 years old. My earliest memories are not of classrooms or playgrounds, but of hospital rooms and seemingly constant fatigue caused by the chemotherapy drugs.

With treatment, I eventually went into remission. But as I grew older and learned the biology of leukemia, one idea unsettled me: The immune system designed to protect me had failed. Cells had multiplied without restraint. Signals meant to maintain order had broken down. Biology became personal for me. I became obsessed with the questions of what cancer is and what survival means biologically. Pursuing science didn’t feel like a career choice; it felt like picking up an unfinished story.

Yet when I entered graduate school, I did not tell anyone about my history—not my lab mates, not even my adviser. I thought professionalism meant keeping my personal life separate from my scientific one. But that separation required constant vigilance. When conversations turned to hospital appointments, childhood, illness, or what had brought us to cancer research, I learned to redirect gently or stay quiet. I answered honestly, but never fully. I worried disclosure might affect how I was seen. Would colleagues doubt my stamina? Would mentors hesitate to invest in me? Would I always be “the survivor” instead of simply a scientist? Would people think I was leveraging sympathy to earn a place in science?

I left India to pursue research abroad, first in the United States, then Israel, and eventually the United Kingdom. In the lab, I felt capable. Outside it, I often felt uncertain. There were evenings alone in my apartment when the distance from home felt vast. In those moments, I sometimes thought about the child I once was, lying in a hospital bed, exhausted, dependent on treatments developed by researchers I would never meet, who had chosen to dedicate their life to understanding diseases like mine. Slowly, I began to realize I was becoming that researcher myself. That thought didn’t make the path any easier, but gave it meaning.

I spoke out at that conference because of a realization that had been slowly coalescing for years: I could no longer keep my personal history and my profession in separate compartments. I did not expect my revelation to alter anything beyond that room. But in the weeks that followed, I began to see that many of my fears had been unfounded. Colleagues did not question my professionalism; they understood my urgency, and our conversations deepened. A student confided that she had her own medical history she rarely mentioned. Later, after I’d become more accustomed to sharing my story, a young patient told me hearing my story made a scientific career feel imaginable.

The shift was internal as well. Previously, a negative result in the lab could send me spiraling into self-doubt. Now, the setbacks are still frustrating, but they no longer feel existential. I remind myself I’ve already survived something far less predictable than an assay that did not work. Flawed experiments have become part of the process, not a measure of my worth.

That day at the podium the words arrived before I fully understood why. Only later did I recognize that it wasn’t my past that had weighed on me, but rather the effort of keeping it separate. Being a survivor doesn’t make me a better scientist, but it shapes how I think about my science. It gives context to long hours and the slow pace of discovery. My personal story has become part of my identity as a scientist, not as a credential, but as a reminder of why the questions matter, and why I chose to ask them.



Scientific conferences can be a bore. Can jokes liven them up?

From ScienceMag:

It is a truth universally acknowledged that any scientific conference, no matter how fascinating, will become a snoozefest—usually during the time slot just before lunch.

But during one such conference a few years ago, Stefano Mammola, an ecologist at the Italian National Research Council, serendipitously found a loophole to this iron law. After sitting through several dull talks, he started chatting with fellow attendee Victoria Stout, an environmental scientist at the University of Colorado Boulder who moonlighted as a stand-up comedian. They quickly realized they had both made the same observation: Whenever a speaker cracked a joke, the audience instantly became more alert and engaged, the speaker appeared more approachable, and the talk itself became more memorable. “When somebody uses humor in an effective way, I recall the information much better in the future,” Mammola explains.

The pair realized they had the makings of a research project on their hands: a comprehensive analysis of how scientists use humor as they relay their findings. “Scientists attend many conferences,” Mammola says. “Why not collect some data?” Over the next few years, the two researchers—along with a growing number of interested collaborators—attended 14 biology-related conferences, collecting data on the use of humor across 531 talks.

As the team reports today in the Proceedings of the Royal Society B, scientists take themselves very seriously indeed: Most presentations contained no jokes at all or just a few. When they did occur, jokes tended to cluster at the beginning and end of presentations, and the majority either fell flat or elicited only polite chuckles—although the authors noted a bump in successful jokes midway through talks. Male speakers told more jokes, and jokes from male and native English speakers tended to get more laughs.

Mammola spoke to Science about the findings and about the potential of humor to improve science communication. The interview has been edited for brevity and clarity.

Q: How did you decide what counted as a successful joke?

A: We really wanted to capture any deliberate attempt to make people laugh, whether it was orally delivered, a gesture, or visually depicted—a meme in a slide, for example. The last two categories were more obvious, while the first was a bit more subtle. But when I was sitting together with Victoria Stout and trying to score independently, we realized that we mostly agreed when somebody was attempting a joke.

We also assessed joke success, which was not easy to standardize. We used a three-level breakdown, where if nobody, or just a few people, was laughing, that’s low success. Medium success is more or less half the room, and high success is when more or less the whole room starts laughing.

Q: Which jokes were the most successful?

A: We didn’t find any pattern with respect to the type of joke. Of course, some types of jokes are more frequently used, but there was not a single type of joke that was more successful than others. Anything can make people laugh or not—it’s more the delivery and timing.

In general, jokes cluster at the beginning and end of presentations. As you start, you want to engage with the audience and connect with them, or maybe you’re a bit nervous, so you throw in a joke to break the ice. And then at the end, you relax a bit. Maybe you want to leave people with a good, lasting impression.

This pattern was quite ubiquitous among all groups, except students didn’t joke as much at the beginning of talks. This category is the one with the least public speaking experience, so they may be more nervous. They had the same peak at the end, so as the presentation progressed, that nervousness probably went away, and they managed to catch up.

Stefano Mammola delivering a characteristically animated plenary lecture at the 35th European Congress of Arachnology in Zadar, Croatia. Tin Rožman & Iva Cupic

Q: You also saw a bump in successful jokes halfway through the presentation. Why do you think that is?

A: When you are speaking to an audience, you realize at some point that you’re starting to lose them. Their minds start to drift. It’s inevitable. And I think an experienced speaker, about halfway through a presentation, is able to throw in a very nice joke to re-engage the mind.

Q: What other trends did you observe?

A: Male speakers told more jokes on average, and jokes delivered by male and native English speakers tended to attract more medium- and high-intensity laughter. Are they really better at telling jokes, or is it that people are more willing to laugh? Joking is a risky activity, because we have this idea that scientists should be serious, and the ability to take risks is not equally distributed. It’s a powerful reminder that inequality in academia affects so many things. I think part of the solution is changing the status quo, discussing these issues, and exposing them.

Q: What do you want people to take away from this research?

A: One conclusion is just the importance of thinking about science communication. The information system in science is increasingly polluted. There are so many papers, so many conferences, so many talks, so much information. The ability to stand out from the crowd and effectively engage your audience is really important and something we need to actively think about.

Q: Of all the jokes you heard, do you have a particular favorite?

A: I cannot come up with a single joke, but what is most effective to me is when people use their bodies, when there’s something totally unexpected in the way the speaker moves. To me, these are the most successful. I also really got to dislike all the stereotypical jokes from speakers talking right after lunch. I guess it’s inevitable, but my data tells me it doesn’t work. You have to be creative.


Why I may ‘hire’ AI instead of a graduate student

From ScienceMag:

The other day, a new research idea struck me. The conceptual path was clear, but the execution would require real effort—synthesizing the literature, writing code, training models, performing statistical analysis. Just a few years ago, the next step would have been a no-brainer. I would recruit a graduate student into my lab and allow them to run with the project, providing guidance along the way. This time, an uncomfortable thought crept into my head: Should I just give these tasks to artificial intelligence (AI) rather than take a chance on a student?

I thought about the skills I had when I started graduate school more than a decade ago, and how much mentoring it took to get me where I am today. I had zero research experience when I emailed faculty to say I was interested in computer science Ph.D. programs. I did my basic due diligence, reading up on what they worked on. But sitting in their offices, listening to them talk about robotics, algorithms, and natural language processing, I had little to no clue what these concepts really meant.

One professor saw past my ignorance and agreed to take me on. I was incredibly grateful for the opportunity, but the first few months were a harsh reality check. I worked tirelessly—reading, summarizing, drafting ideas, and trying to make sense of it all. Yet, whenever I would present my work to my adviser, she would look at the nonsense I had presented, give me feedback, and send me back to start from scratch.

I thought about quitting. I felt I was constantly disappointing her. But she didn’t give up on me. Perhaps she believed in my potential, perhaps she saw I was doing the best I could, or perhaps she simply believed in the process of cultivating a scholar. It took a year or so of immense patience before I finally produced something we could build on. From there, I slowly transformed from a clueless novice into a junior colleague.

Years later, when I became a professor, I watched my own students struggle to make progress, just as I once had. My calendar filled up with meetings where my main job was to untangle their confusion. Eventually, though, the investment paid off, and I experienced the deep satisfaction of watching them transform into capable junior collaborators.

Now, AI has introduced a new option. It is certainly no extraordinary intellectual partner. But it can competently perform a lot of the work I need immediately; AI requires no ramp-ups, no meetings, and absolutely no emotional support. It is forcing a quiet, uncomfortable shift in my mindset.

The issue is not whether my students are valuable. In the long run, they are invaluable. The issue is that their value emerges slowly, whereas AI delivers immediate returns. I feel somewhat embarrassed to admit how tempting this is. In our culture, preferring an algorithm to a trainee feels like a betrayal of the academic mission.

Yet I see these calculations shaping the labs around me. Close colleagues are quietly refraining from taking on as many students as they used to. When they do take students, they are noticeably pickier.

My immediate instinct is to expect any student I recruit in this new environment to contribute at a much higher level from the outset. But to meet those elevated expectations, a student would likely rely heavily on the same AI tools I could turn to on my own. In the process, they may bypass the valuable experience of struggling through early tasks and learning from their mistakes. Students, I worry, could simply become an intermediary between the raw idea and the AI’s output.

For faculty, meanwhile, the pressure to produce remains relentless and the scientific pace is unforgiving, making a productive and frictionless AI even more tempting. The real danger I see is not that AI will entirely replace graduate students in the foreseeable future. It is that the default assumption that taking on students is simply part of any professor’s academic journey will quietly erode. In some cases, the most pragmatic solution could be to use an AI.

I’m not sure where that will leave students who start with no research experience. Personally, I am seriously tempted not to take a chance on a novice for my new project—which means today, I probably wouldn’t recruit my younger self.



Career effects of preprints get mixed reviews from biomedical researchers

From ScienceMag:

Nearly half of biomedical scientists worry preprints could spread shoddy research and misinformation, according to a new survey that could help explain why the life sciences have taken up the publishing practice more slowly than some other fields.

The survey is one of the largest to date to examine views of life sciences researchers on the practice of placing non–peer-reviewed manuscripts on public servers. The results, posted this week on the bioRxiv preprint server, also reveal that researchers on average do not believe publishing preprints enhances their career advancement. But many acknowledge benefits, such as spreading their findings more quickly than peer-reviewed journals do and helping them find collaborators.

“This study makes a valuable contribution because it highlights the persistent tension between the benefits of rapid dissemination and the way research is evaluated,” says Jeremy Ng of University Hospital Tübingen, who studies health research methodology and was not involved in the new study. “Hiring, promotion, and funding decisions often still revolve around traditional journal publications.”

Biomedical preprints have become more common over the past decade and spiked during the COVID-19 pandemic. But previous studies have indicated larger shares of physicists and economists regularly post preprints than researchers in the life sciences. “We wanted to know what is stopping the [biomedical] community from adopting them to a larger extent,” says information scientist Chaoqun Ni of the University of Wisconsin–Madison, who led the new study.

The survey, completed by nearly 1800 biomedical researchers in the United States and Canada in early 2025, reveals substantial variations in the use of preprinting. Two-thirds of respondents read at least one preprint during the previous 2 years. Only about half of respondents had submitted one in that time span, and only one-third had cited a preprint. Junior scientists were more likely to embrace these practices.

Among respondents not reading or citing preprints, the most common reason was concerns over quality. Among all survey takers, 42% predicted a strongly negative effect on science from preprints that spread misinformation. In comments submitted with their survey answers, some respondents voiced strong reservations about the growing use of artificial intelligence (AI). “Professors [could] mass-generate preprints with AI,” wrote an unnamed associate professor. These could “crowd out legitimate scholars who are publishing at a slower pace because they are actually doing real studies and going through peer review.”

Worries about quality may come disproportionately from clinical researchers concerned that the lack of independent vetting of preprints could jeopardize patient safety, says Richard Sever, chief science and strategy officer at openRxiv, a nonprofit that operates the widely used bioRxiv and medRxiv preprint servers devoted to biomedical science. (The new study does not report responses separately by subdiscipline.)

But concerns over quality may be based more on researchers’ impressions than evidence, Sever says, noting that bioRxiv and medRxiv reject submissions that don’t use the scientific method or that pose obvious risks to public health. Preprinting a fraudulent manuscript exposes it to more scrutiny than if it appeared only in a journal, he adds. “If you get a reputation for being the person who always puts up stuff [on preprint servers] which doesn’t have complete data and is shoddy, then you’re done in academia.” What’s more, some 80% of preprints eventually appear in peer-reviewed journals. And despite their quality checks, journals publish problematic papers, he says.

Respondents to Ni’s survey also saw upsides to preprinting, with about half agreeing it can accelerate the dissemination of scientific findings compared with journals, where peer review can take months and much of the content is paywalled. That finding echoes results of a survey of 7000 bioRxiv and medRxiv users, conducted by openRxiv in 2023 and posted on 26 February, in which respondents praised fast dissemination of findings as a top benefit.

Only about 16% of respondents agreed strongly that preprints reduce the importance that professional evaluators—those who review grant applications or make hiring and tenure decisions—place on articles in subscription-based, selective, peer-reviewed journals. Shifting away from traditional journals is a goal that advocates of open science have touted and some funders have embraced. For example, in 2025 the Gates Foundation began requiring grantees to post as preprints all manuscripts that result from research it funds, and it stopped paying for researchers to publish their papers in journals that charge a fee to make papers free.

Still, many universities’ professional review procedures explicitly prefer or require peer-reviewed publications, Ni notes. More than 60% of the survey respondents involved in funding, hiring, or tenure decisions said they give more credit to peer-reviewed papers than preprints; less than 12% said they credit both types equally. “Nobody has time to read preprints from 30 candidates for a position or award to determine their value,” an associate professor wrote in another survey comment. “Thus, we use journal [publications]. At least as a reviewer, we know there has been some bar surpassed.”

To help readers better judge the quality of preprints, Ni’s preprint suggests that preprint server managers find automated ways to summarize the rigor and transparency of each manuscript they post. Ng, who co-authored a 2024 survey of biomedical researchers’ views on preprinting, cautions that any such indicators “would need to strike a careful balance [to] avoid the oversimplification of research quality into a single score or checklist.” He argues professional evaluators need to judge the transparency and rigor of applicants’ research for themselves. “If institutions want to encourage open science practices, they need to ensure that researchers are not penalized, either explicitly or implicitly, for sharing their work early.”


Why we should look beyond grades to spot potential in STEM

From ScienceMag:

The girl in the lab coat was extracting DNA from a piece of lettuce. She held the pipette like it was something sacred—like it might break if she breathed too hard. Beside her, a boy adjusted his goggles, avoiding eye contact. He didn’t ask a single question. Not because he didn’t have any, but because somewhere along the way, someone taught him to stay quiet. Outside those walls, their parents were at work under the Arizona sun, harvesting the same crop. They pulled lettuce from the earth to feed the country. Their kids pulled out its genetic material to understand it. The overlap was intentional. In this 1-week summer camp, we aimed to show students that there is a path from the agricultural work their communities have done for generations to STEM.

The program was personal for me because I, too, grew up in an agricultural town, the son of immigrant farmworkers. Schools were underfunded, the guidance counselor overworked, and expectations modest. College wasn’t the assumed path—it was the exception. I know what it’s like to sit in classrooms that prepare you for labor, not leadership, and to feel the quiet sorting that tells some students they belong in universities and others they don’t.

I graduated high school with a 1.9 GPA, so community college was my only option—and even then, I struggled. My first year was marked by a string of withdrawals and failing grades, culminating in a 0.0 GPA. But slowly, class by class, I found my footing. A few instructors encouraged me to stay with it, and eventually I was able to transfer. A decade and a half later, I had earned a master’s degree from Johns Hopkins University and a doctorate from Harvard University—outcomes the student I once was could never have imagined.

Today, I am a tenure-track faculty member at Arizona State University, a role that still feels improbable given my beginnings. Shortly after I started, state education officials approached my academic unit with an idea: to launch a STEM program for students from migratory farmworker families, a group that is underrepresented in science despite descending from generations of agricultural knowledge holders.

I know what programs like this can make possible. I am a product of federally supported training programs that intervened at critical stages in my own education. When I was an undergraduate student, for instance, a Department of Education program for students from disadvantaged backgrounds helped nudge me toward doctoral study when that path still felt distant. I have long believed that genius is evenly distributed across society, and that it just needs room to surface through exposure to science.

So I accepted the state’s challenge, and with colleagues developed a program that enrolled 50 to 80 high school students each year for four summers. Students lived in dorms, ate in dining halls, and rotated through immersive, hands-on labs led by faculty. Designed to replicate the university experience, the weeklong program aimed to make science tangible and accessible. Evaluation across cohorts showed consistent gains, including increased interest in STEM careers as well as meaningful rises in college aspirations. On paper, the program worked.

My favorite outcomes, however, were ones not captured by numbers. For many students, this was their first time away from home. They arrived shy and guarded, unsure how to introduce themselves or how to relate to the academic world. As the days progressed and they stepped into university labs and saw people who looked like them, they began confidently asking questions and talking openly about wanting to become doctors and researchers. By the end, they were reluctant to leave the community they had built in just 1 week.

The experience stayed with me because I was once the student with a 1.9 GPA, unsure of my place, waiting for someone to see potential that the data could not show. I see myself in the girl who once hesitated with the pipette and now steadies it with confidence, and in the boy who didn’t ask a single question but now leans forward, curious and engaged.

I wish the system did a better job of looking beyond traditional academic metrics when assessing potential. Because watching these students, I am reminded how transformative that moment can be when someone finally sees in you what you could not yet see in yourself.



NIH reneges on recognizing union for early career researchers

From ScienceMag:

A union representing thousands of early-career scientists who work in labs run by the U.S. National Institutes of Health (NIH) received notice this week that the agency would no longer recognize the group “in its entirety.” It isn’t yet clear how the move, which union members say is illegal, will affect the contract agreed to by NIH that the union ratified in December 2024.

“Management’s refusal to follow the contract would jeopardize the raises, guaranteed health insurance, guaranteed leave time, and the protections for our professional development and safe workplaces that we won,” union leaders wrote in a 3 March email to members.

The union, called NIH Fellows United, represents roughly 5000 graduate students, postdocs, postbaccalaureate researchers, and others who work on a nonpermanent basis at NIH’s in-house research facilities on its main Bethesda, Maryland, campus and at other locations. Many were brought into the agency through training programs designed to give early-career researchers a chance to develop their scientific and professional skills—a fact that NIH officials flagged in the 2 March email to union leadership repudiating the group.

The notice, which has been reviewed by Science, states that trainees in these programs are not “employees” and that the union should never have been certified in the first place by the Federal Labor Relations Authority (FLRA), the agency that oversees unions made up of federal employees.

The employee argument has long been used by opponents of graduate student unionization efforts at universities, who say the work students perform is part of their education and that therefore they are not employees with a right to unionize. The main federal entity that has wrestled with the issue is the National Labor Relations Board (NLRB), which oversees unionization at private universities. For the past decade, NLRB has allowed graduate students to form unions. But the issue resurfaced in 2019, when—during President Donald Trump’s first administration—the board proposed a rule stating students aren’t employees. The rule never went into effect.

NIH itself had initially signaled in 2023 it would oppose the formation of NIH Fellows United on the grounds that its trainees weren’t employees. But it later backed away from that argument and allowed nonpermanent researchers to vote on whether to unionize. The union was officially certified by FLRA in December 2023 after NIH fellows voted 98% in favor of forming a union. It marked the first time scientific trainees won the right to unionize within the federal government.

When contacted by Science, an NIH spokesperson declined to comment on why the biomedical research agency, which also disburses billions of dollars in grants to universities, is changing its stance on the union now. “NIH cannot comment on active labor relations matters,” they wrote. The 2 March email to union leaders states that NIH plans to file a petition with FLRA, presumably seeking to decertify the union.

The leaders of the NIH union also declined to comment. In an email to their members, they wrote that they were “working closely with our legal counsel to understand the full implications of this notice and develop a comprehensive response. … We will fight this with the full strength of our membership, our national union, and our allies.”


When I lost my university email, my identity as a scientist took an unexpected hit

From ScienceMag:

I had known my contract was ending. I had just completed the final interview for a position abroad and was already preparing for the move. But when a message arrived saying, “Your university email account will be deactivated in 30 days,” I felt strangely unmoored. For early-career researchers like me, the global academic landscape can feel daunting. Permanent positions are scarce, competition is intense, and many of us move from one temporary position to another, often across countries and continents, trying to build a scientific identity. Losing my institutional email address felt like losing a small but vital piece of the scientist I had become.

My academic journey began with a Ph.D. in my home country of China, followed by a postdoctoral fellowship in Saudi Arabia, and then a series of positions in Australia that were either tied to a grant or temporary. Each move brought expanded research directions, wider collaborations, greater responsibility, and deeper engagement with students, but none came with long-term security. At times, the path forward felt exciting; at others, deeply tenuous. Constant relocation was hard for my family as well, requiring us to adapt to new cities and communities while I tried to maintain momentum in my work.

When I arrived at my most recent position in 2021, I was eager to prove myself. I was appointed to a contract faculty role, responsible for leading a small research group while establishing an independent research program. My days were filled with troubleshooting experiments, writing manuscripts, drafting grant proposals, and learning to mentor my first students. I began to form collaborations across time zones, and my institutional email became the channel through which these relationships took root. Through that address, I submitted manuscripts, coordinated projects, reviewed papers, and answered late-night questions from students testing out their first ambitious ideas. Messages also arrived from prospective Ph.D. students, some of whom would later join my group. My scientific life gathered there, thread by thread.

As my contract neared its end late last year, I focused on preparing for the transition and helping my students and researchers find new positions within the university. Still, the deactivation notice felt like a door closing, faster than I was prepared for.

I scrambled to reroute everything to my personal and former professional addresses until I gained access to a new one. Inevitably, things slipped. I nearly missed invitations to contribute to special issues, and several manuscript review requests went unnoticed until I logged into the submission portals by chance. A former mentee’s request for a reference letter was delayed. Some collaborators’ messages never reached me. One former student who urgently needed help with her manuscript eventually tracked me down through social media. Each disruption reminded me how much of academic life relies on simply being reachable. Writing from a generic address felt different, as though my professional standing had been diminished. Fairly or not, an institutional email address signals belonging—to a department, a university, a scientific community.

Universities often speak of lifelong learning and long-term impact. Moving through different institutions has shown me how meaningful it can be when those values extend even to the smallest details. Years ago, after finishing my first postdoctoral fellowship, I expected my email access to disappear the moment my university ID card stopped working. Instead, it remained active for years. Every so often, former colleagues sent holiday greetings or shared good news. Readers wrote with questions about publications still tied to that email. Those messages reminded me that even though my contract had ended, my place in that community had not. By allowing researchers to keep their institutional email address for at least 6 months after their position ends, universities could better support those of us navigating the uncertain early stages of an academic career.

I am now preparing to settle into a tenure-track position in China. For the moment, I rely on my personal and former institutional addresses for academic tasks while waiting to form new connections through my new professional email. As researchers, we are always building—and rebuilding—a sense of belonging.

