The U.S. National Science Foundation (NSF) has chosen a record number of students this year to receive its prestigious graduate fellowship, rebounding from last year’s unusually small cohort. The size of this year’s class, announced today, together with a more traditional distribution across fields, could ease fears that NSF, under pressure from President Donald Trump’s administration, had decided to shrink and alter the nature of a program that has supported 50 future Nobel laureates since it began in 1952.
“My take is that the STEM community’s activism around last year’s cuts appears to have had significant positive impacts on this year’s class,” says Susan Brennan, a cognitive psychologist at Stony Brook University and former fellowship program officer.
This year’s class returns to a more familiar distribution, with engineering and biology once again leading the pack. But whereas in 2023 and ’24 those two disciplines each garnered about one-quarter of the awards, this year saw a shift toward engineering, with some 35% of the awards going to students in that area, followed by 19% in biology. At the same time, the life sciences were the most represented discipline among the honorable mentions, accounting for 40% of the 1440 total. Among awardees, computing and information science saw a slight uptick, from 7% in 2023 and ’24 to 10%. And psychology took a hit, dropping from 5% and 6% in those years to 2% of the overall pie.
The NSF fellowship provides students with an annual stipend of $37,000 for 3 years and gives their institutions $16,000 annually to defray tuition and other educational costs. It’s also a portable scholarship, in contrast to the typical arrangement in which a graduate student’s support is tied to their institution, either from their adviser’s research grant or a graduate training program in a particular field. NSF says it received nearly 14,000 applications for this year’s class.
NSF has launched an initiative to identify high-tech companies willing to contribute to the support of future classes of graduate fellows. But an agency spokesperson says, “NSF plans to support [this year’s] fellows with available appropriated funds in [fiscal year] 2026.”
NSF names record number of graduate fellows, rebounding from 2025 dip (Vincent Barbier, 13 April 2026)
A rattle and a loud banging noise suddenly rang out in the lab I had recently joined as a Ph.D. student, and I realized I was to blame. When placing tubes into a centrifuge, I had failed to make sure they were perfectly balanced. My mistake was clear the second I turned it on. I couldn’t switch it off until the cycle finished, so I stood there, frozen, praying the large machine wouldn’t topple off the counter as it shook. When it was all over, a senior postdoc tried to cheer me up by saying, “These things happen.” But I was mortified.
I had always been a top student, and when I started my graduate program I had already completed medical school. I expected excellence from myself—not mistakes. Any misstep, in my view, was a sign that I might not be cut out for this kind of work at all.
I was determined to carry on and have no more mishaps. But the lack of a clearly defined curriculum in graduate school meant I did not always know the rules. I was used to structured systems, clear milestones, and prescribed paths. Suddenly, I was expected to build everything from scratch, while trying to steady myself on what felt like a rocking boat.
As the months went on, the mistakes continued to pile up. My first attempt at DNA extraction failed, and I feared I might never get it right. That fear deepened when a couple more attempts failed, too. Then I mixed up the results and methods sections in my writing, simply because I didn’t yet understand how scientific writing worked. When my supervisor explained that the details of how I ran the experiments belonged in the methods section, not the results, I was overwhelmed by the thought: “I should have known this.”
Even though these moments are a normal part of graduate school—students are there to learn, after all—I had trouble accepting my errors and moving on. As I kept making blunders, it felt like all eyes were on me, judging me.
The fear became so overwhelming that I began to pull back and stop trying so hard. I no longer showed up to the lab with a desire to ace everything. I didn’t speak up and ask questions. And when I had an idea for a new experiment, I was afraid to give it a try. I didn’t want to give others any more evidence of my inadequacy.
Then, one night I had a dream. I knew I was suffering, and I watched myself, with deep compassion. That dream showed me something I hadn’t been able to grasp in waking life: I needed to treat myself with the same compassion I would offer a dear friend.
If a friend had made a mistake, I would have told them to take it lightly, to see it as part of growing, and even to welcome it as a necessary part of learning. That realization led me to change my reaction after something went wrong. I began to treat myself like that friend. That shift in mindset made it easier to avoid being crushed by the weight of failure.
I also started to log my mistakes so I could learn from them. I would note whether there was something I could have done differently. Then I would move on. This simple practice helped me grow, even if that growth was messy. Flipping through my log, I could see how much I had learned to do better.
As I became more comfortable with the idea of making mistakes, I opened up to peers and mentors. What I heard from them was striking: Almost all of them had made mistakes—small or large. I began to see that mistakes are part of life for any scientist who is learning and doing anything new, a realization that’s made it easier for me to navigate the uncertainty and pressures of graduate school.
The reality is, we are all going to mess up. I now realize that’s OK, even necessary. “Doing better” comes from “doing first,” often with stumbles along the way.
Do you have an interesting career story to share? You can find our author guidelines here.
I was debilitated by mistakes in grad school. A dream reshaped my perspective (Vincent Barbier, 9 April 2026)
“Have you considered running for elected office?” My friend’s question didn’t come out of nowhere. I was active in my community as a volunteer, especially in environmental and social justice causes, and I regularly met elected officials and advocated for issues I cared about. But the question still took me by surprise. As a tenured professor and dean, my academic identity was firmly established. Was politics even something that academics did?
By the usual measures, I was successful. I had good funding, a solid publication record, and I had been promoted to serve as dean of engineering at the liberal arts college where I work in New York state. I enjoyed my leadership role and my research. But I did have reason to think about moving in another direction.
My most cited paper was a nice article with some juicy math—3D vector calculus in non-Cartesian coordinates!—but the work had little relevance to everyday issues. That always bothered me.
So did academics’ reluctance to speak out about policy. I had noticed that even when scientific papers did have findings worth sharing with the public or government officials, they tended to bury phrases like “We recommend that policymakers do X” near the end. There was an unstated assumption that a scientist’s role is to inform policy, not help enact it. That stuff was done by other people.
Ashok Ramasubramanian
Templeton Institute at Union College
When I turned 50, I also started to ask myself uncomfortable questions about my own future, such as, “What can I do with the time that is left to me?” I wondered whether I would have regrets if I did not serve my community more directly. After fulfilling my dean duties, I had only so much time left in the day, and I realized I could not both remain an active researcher and engage in public service. To make it work, I would have to give up research.
I had just completed work on a major federal grant. So, I began to think the time might be right to consider running for a town council position. When the idea was only a nascent possibility, I broached it with my boss, our college’s vice president of academic affairs, and was pleased to discover that she was supportive. Our institution encourages community service and outside-the-box thinking, and administrators are generally happy for faculty to branch out. The idea is to help model lifelong learning—a value we work to instill in our students.
After much thought, I decided to take the leap. I closed down my lab space and liquidated all my research assets, turning them over to more junior faculty members, and began to spend my nondean hours going door to door and talking with voters.
It was a new world, and I had to learn a lot of new things quickly. My experience dabbling in research fields outside my own was helpful as I tackled activities that were new to me, like fundraising, campaign finance reporting, and social media outreach. But I also leaned heavily into my favorite philosophy: “Fake it till you make it.”
I was pleased to find that voters in my community appreciated my candidacy. Being a scientist and an academic helped me stand out as a unique and qualified candidate. And since winning my election and taking my seat on the town council in January, I have tried to use skills I gained as a scientist to help my community. For instance, my experience writing research proposals is helpful when applying for grants aimed at infrastructure maintenance and green space preservation. And as a researcher who’s had his share of scientific disagreements with other researchers, I have been able to work with others whose viewpoints differ from mine—an approach that is needed in these times of intense political polarization. The time commitment has also been manageable, as council meetings are held in the evenings after normal work hours.
I miss many aspects of research, especially spending quiet time in my lab and mentoring students. But the experiences of running for office and serving the public as an elected official have been equally rewarding and fulfilling. I am not sure what my political future holds, but for now I am having quite a bit of fun serving my community in an official capacity. I encourage other scientists to ask hard questions about new ways to put their skills to work, especially in the second half of life.
Do you have an interesting career story to share? You can find our author guidelines here.
I worried my science wasn’t making an impact. So I ran for elected office (Vincent Barbier, 2 April 2026)
Experimental Error is a column about the quirky, comical, and sometimes bizarre world of scientific training and careers, written by scientist and comedian Adam Ruben.
Barmaleeva/Shutterstock, adapted by C. Aycock/Science
Meredith Cimmino had been careful to avoid artificial intelligence (AI) tools when writing her dissertation. But when her Ph.D. committee at Rutgers University recommended she check for any new publications in the field, just to make sure her references were up to date, she thought it wouldn’t hurt to ask ChatGPT a quick question. “Everybody’s been talking about using AI to look things up,” Cimmino wrote to me, “so I’m like, ‘Oh let me just go look.’”
Sure enough, the AI tool immediately spat out a list of articles she had never heard of (and, if it operated the way I’ve seen ChatGPT operate, it probably started with an off-putting compliment like, “That sounds like a dynamic research field!”). At first, Cimmino was ecstatic. Not only could she update her paper with these references, but she could also bolster her conclusions. The titles and AI-generated summaries of the papers’ findings seemed to strongly support her own.
But the deeper she dug, the more she questioned the list ChatGPT had given her. First and foremost, she told me, the mere existence of this plethora of supportive studies sounded “too good to be true”—because, as a Ph.D. student who had been researching the field for years, why hadn’t she heard of any of the papers? “So, I go look up the studies,” she explained. “And they don’t exist.”
To be clear, these fake references are very, very convincing. They’re not like the agrammatical crypto phishing scams we’re all used to. (“The IRS hopes to giving your refund! Click this Belarussian website domain for money flavors!”) They use realistic names, real journal titles, plausible summaries, and they appear in response to your own highly specific question.
This is partly the fault of how AI operates. Under the hood, it doesn’t just search for the right answer to your query—it also asks, “What would an accurate and helpful response to this prompt look like?” Sometimes the two align, and it delivers a genuinely accurate response. But sometimes it favors the “what would one look like” part of its algorithm, and then it gets to work generating references that resemble the sort of thing you’re hoping to find.
Just to see what would happen, I opened ChatGPT and referred it to this column, telling it to examine my back catalog of about 180 Experimental Error articles. Then I asked it to name five articles I’ve written about AI and give a short summary of each. I asked this question knowing full well that I’ve only written about AI once or twice, and a correct response would either be to point this out, or maybe to name a few columns I wrote that weren’t exactly about AI but maybe had AI-ish elements in them.
Nope. It just hallucinated.
First it cited an article correctly, a piece published in May 2025 about researchers asking AI to summarize scientific papers. But then it cited four more articles that never existed. Each had a plausible title. One was called “Reviewer 3 Is Now a Neural Network.” Another promised that I had tackled the provocative question: “Should You Let AI Design Your Experiments?” But I never wrote these articles, and based on a Google search, neither did anyone else. The AI engine didn’t just misattribute someone else’s writing to me; it generated new article titles that no one wrote and swore they were mine.
ChatGPT even gave each article a lovely little (fake) summary. For example, under an article titled “Chatbots in the Lab: Helpful Assistant or Liability?” it commented, “Ruben reflects on the growing use of conversational AI tools by students and researchers—for coding, writing, and troubleshooting experiments.”
I know these articles don’t exist because I’m me. But unless the searcher independently tries to find them, how would they know the truth? Who in the world could be expected to know I’ve never written these articles when AI cites and summarizes them so convincingly?
I continued the conversation. “Adam Ruben never wrote articles 2-5 in that list,” I typed. “Did you hallucinate them?” The reply was very honest, in both a refreshing and terrifying way: “Yes—you’re right to call that out,” it began. “I did hallucinate articles 2-5 in my previous response.”
Then it described in detail why it may have hallucinated: “This is a classic hallucination pattern: I had one real anchor (the May 2025 AI article). I extrapolated similar-sounding topics consistent with his column. I failed to verify each item against a reliable source.”
Well, for goodness’ sake.
That’s the same problem some researchers have. And one might say any scientist who cites a paper they’ve never read deserves to be called out for fraud, or at least for their concerning lack of due diligence. But think about all the papers you’ve had your name on. Have you read every reference in those papers? When the first line of your article is “[Subject] has been extensively studied1-28,” have you read all 28 of those references? Your time is limited, articles are often behind paywalls, and lots of older work hasn’t been digitized. If reference No. 25 is a 60-year-old paper in a journal that your institution doesn’t subscribe to—but you’ve seen it listed in other papers as one of the seminal publications in your field, and you’ve read an abstract—would you really leave it out, and risk failing to pay tribute to something important? Or would you do what everyone else does, and keep it in?
Luckily, one solution is to use a tool we’ve already developed: our skepticism. Our assumption that information is likely wrong, until we see reasonable evidence otherwise, is part of what makes us successful as scientists. Now, we just need to apply it to citations as well.
And by “we,” I mean all of us: scientists writing papers, scientists reading papers, and even—and especially—the scientific journals that evaluate and publish our papers.
We need to do this to make sure our own work is sound. But we also need to ensure we’re not awarding these bogus references credibility. If Cimmino hadn’t tried to chase down the citations AI had recommended, she might have pasted them into her thesis—and then a future student, hoping to build on her research, would have had all the more reason to believe these articles, and their conclusions, were real.
Researchers are developing new tools to double-check the veracity of citations as well. Publisher Elsevier, for example, now offers a program called LeapSpace that includes a “truth card” with each result to explain whether a reference supports, refutes, or is neutral about a conclusion. In other words, it fights the problems of AI by using … what we hope is better AI.
A few days after telling me her story, Cimmino sent another short message. She realized she had referred to AI throughout her story as “they,” and she asked me to please change “they” to “it.”
“I didn’t know it was making them up,” she wrote of the hallucinated citations. “I know AI is not real.”
I hope we all do. But it’s easy to forget, isn’t it?
My heart raced as I walked into the classroom, where 200 curious medical students were waiting for me. While the technician fitted my microphone, I gripped the podium and scanned the sea of expectant faces. After years of turning down opportunities to teach, I’d finally agreed to give it a go—and I was terrified. I dove in and felt myself whizzing through my slides, trying to get through the material before my nerves got the best of me. After a few minutes, a student raised her hand and asked me to slow down. I felt my face go red—had I messed up my first ever lecture?
I never imagined I would find myself in front of a classroom. As an introvert and a nonnative English speaker, I found the prospect daunting. I’d see colleagues face hundreds of students and give fluent, engaging lectures and think I could never match up to them. Instead, I decided I’d stick to research, where I felt comfortable running experiments, applying for grants, and mentoring individual students.
A few years into my postdoc, however, my mentor asked whether I could teach her class while she sat on a grant-review panel. Out of respect for her, I said I’d give it a try, though I was nervous. I spent hours preparing, listening to recordings of her past lectures and cramming her slides with extra information, worried I’d forget what to say.
So I was embarrassed when, just minutes into the lecture I’d so meticulously prepared for, the student told me she was having trouble keeping up. I paused, took a breath, and adjusted my pace. And to my surprise, the energy in the room shifted. Students leaned in and asked questions, and I began to feel more of a connection to what I was teaching. I was back on track. By the end of the class, nothing had gone terribly wrong and I was relieved to have fulfilled my obligation to my mentor.
Later, when I watched the recording of the class, I could see my teaching get better as the lecture went on, and I began to get excited by the prospect of improving further. With my mentor’s support, I decided to take on more classes.
And so a single lecture grew into a regular commitment and eventually a responsibility I embraced. It took time and practice to become a confident, engaging teacher, but student feedback and teaching evaluations helped. After a student told me my slides had too much text, for instance, I redesigned them to include more visuals and fewer words, and found that this change helped make discussions more interactive. Eventually, I was offered a position designing courses as well as teaching them, something I had never anticipated in my career path.
At first, I worried teaching would distract me from the relentless demands of maintaining a funded research lab. But I actually found it sharpened my focus and transformed how I communicate science to colleagues and funders. Preparing lectures required me to revisit fundamentals I hadn’t thought about in years, keep up-to-date with new science, and learn to clearly explain complex ideas. In the lab and at conferences, I slowed down and focused on explaining concepts and protocols clearly, resulting in better discussions and more collaboration. I even secured a major grant—proof that clarity and connection matter as much in funding proposals as in classrooms—and my teaching experience helped me gain an earlier than anticipated promotion to my next faculty appointment.
Looking back, saying “yes” to teaching was one of the most transformative decisions of my career. It didn’t just make me a better educator; it made me a better scientist. For anyone nervous about the prospect of teaching, I can only recommend giving it a go. It’s common to worry about language fluency, feeling exposed in a room full of brilliant minds, or being pulled away from research duties. But anyone who thoroughly understands their subject can become a better communicator with practice and by refining their approach over time. Sometimes the most fulfilling academic life is not the one we first imagined, but the one we build through both intentional choices and unexpected experiences.
Do you have an interesting career story to share? You can find our author guidelines here.
How I got over my fear of teaching (Vincent Barbier, 26 March 2026)
I was in the fourth year of my Ph.D. in tumor immunology when I gave a talk at a major international conference. I had rehearsed every slide, every transition, determined to present my results as a coherent scientific story. But near the end I paused and said something I had not practiced. “This research is personal; I’m not only a researcher, but also a survivor of childhood leukemia.” The words surprised me as soon as they left my mouth. I felt I had crossed an invisible professional line I had spent years trying not to approach.
I was diagnosed with acute lymphoblastic leukemia when I was 3 years old. My earliest memories are not of classrooms or playgrounds, but of hospital rooms and seemingly constant fatigue caused by the chemotherapy drugs.
With treatment, I eventually went into remission. But as I grew older and learned the biology of leukemia, one idea unsettled me: The immune system designed to protect me had failed. Cells had multiplied without restraint. Signals meant to maintain order had broken down. Biology became personal for me. I became obsessed with the questions of what cancer is and what survival means biologically. Pursuing science didn’t feel like a career choice; it felt like picking up an unfinished story.
Yet when I entered graduate school, I did not tell anyone about my history—not my lab mates, not even my adviser. I thought professionalism meant keeping my personal life separate from my scientific one. But that separation required constant vigilance. When conversations turned to hospital appointments, childhood, illness, or what had brought us to cancer research, I learned to redirect gently or stay quiet. I answered honestly, but never fully. I worried disclosure might affect how I was seen. Would colleagues doubt my stamina? Would mentors hesitate to invest in me? Would I always be “the survivor” instead of simply a scientist? Would people think I was leveraging sympathy to earn a place in science?
I left India to pursue research abroad, first in the United States, then Israel, and eventually the United Kingdom. In the lab, I felt capable. Outside it, I often felt uncertain. There were evenings alone in my apartment when the distance from home felt vast. In those moments, I sometimes thought about the child I once was, lying in a hospital bed, exhausted, dependent on treatments developed by researchers I would never meet, who had chosen to dedicate their life to understanding diseases like mine. Slowly, I began to realize I was becoming that researcher myself. That thought didn’t make the path any easier, but gave it meaning.
I spoke out at that conference because of a realization that had been slowly coalescing for years: I could no longer keep my personal history and my profession in separate compartments. I did not expect my revelation to alter anything beyond that room. But in the weeks that followed, I began to see that many of my fears had been unfounded. Colleagues did not question my professionalism; they understood my urgency, and our conversations deepened. A student confided that she had her own medical history she rarely mentioned. Later, after I’d become more accustomed to sharing my story, a young patient told me hearing my story made a scientific career feel imaginable.
The shift was internal as well. Previously, a negative result in the lab could send me spiraling into self-doubt. Now, the setbacks are still frustrating, but they no longer feel existential. I remind myself I’ve already survived something far less predictable than an assay that did not work. Flawed experiments have become part of the process, not a measure of my worth.
That day at the podium the words arrived before I fully understood why. Only later did I recognize that it wasn’t my past that had weighed on me, but rather the effort of keeping it separate. Being a survivor doesn’t make me a better scientist, but it shapes how I think about my science. It gives context to long hours and the slow pace of discovery. My personal story has become part of my identity as a scientist, not as a credential, but as a reminder of why the questions matter, and why I chose to ask them.
Do you have an interesting career story to share? You can find our author guidelines here.
How opening up about being a cancer survivor reshaped my Ph.D. journey (Vincent Barbier, 19 March 2026)
It is a truth universally acknowledged that any scientific conference, no matter how fascinating, will become a snoozefest—usually during the time slot just before lunch.
But during one such conference a few years ago, Stefano Mammola, an ecologist at the Italian National Research Council, serendipitously found a loophole to this iron law. After sitting through several dull talks, he started chatting with fellow attendee Victoria Stout, an environmental scientist at the University of Colorado Boulder who moonlighted as a stand-up comedian. They quickly realized they had both made the same observation: Whenever a speaker cracked a joke, the audience instantly became more alert and engaged, the speaker appeared more approachable, and the talk itself became more memorable. “When somebody uses humor in an effective way, I recall the information much better in the future,” Mammola explains.
The pair realized they had the makings of a research project on their hands: a comprehensive analysis of how scientists use humor as they relay their findings. “Scientists attend many conferences,” Mammola says. “Why not collect some data?” Over the next few years, the two researchers—along with a growing number of interested collaborators—attended 14 biology-related conferences, collecting data on the use of humor across 531 talks.
As the team reports today in the Proceedings of the Royal Society B, scientists take themselves very seriously indeed: Most presentations contained no jokes at all or just a few. When they did occur, jokes tended to cluster at the beginning and end of presentations, and the majority either fell flat or elicited only polite chuckles—although the authors noted a bump in successful jokes midway through talks. Male speakers told more jokes, and jokes from male and native English speakers tended to get more laughs.
Mammola spoke to Science about the findings and about the potential of humor to improve science communication. The interview has been edited for brevity and clarity.
Q: How did you decide what counted as a successful joke?
A: We really wanted to capture any deliberate attempt to make people laugh, whether it was orally delivered, a gesture, or visually depicted—a meme in a slide, for example. The last two categories were more obvious, while the first was a bit more subtle. But when I was sitting together with Victoria Stout and trying to score independently, we realized that we mostly agreed when somebody was attempting a joke.
We also assessed joke success, which was not easy to standardize. We used a three-level breakdown: If nobody, or only a few people, laughed, that’s low success. Medium success is roughly half the room, and high success is when more or less the whole room starts laughing.
Q: Which jokes were the most successful?
A: We didn’t find any pattern with respect to the type of joke. Of course, some types of jokes are more frequently used, but there was not a single type of joke that was more successful than others. Anything can make people laugh or not—it’s more the delivery and timing.
In general, jokes cluster at the beginning and end of presentations. As you start, you want to engage with the audience and connect with them, or maybe you’re a bit nervous, so you throw in a joke to break the ice. And then at the end, you relax a bit. Maybe you want to leave people with a good, lasting impression.
This pattern was quite ubiquitous across all groups, except students didn’t joke as much at the beginning of talks. That group has the least public speaking experience, so they may be more nervous. They had the same peak at the end, so as the presentation progressed, that nervousness probably went away, and they managed to catch up.
Stefano Mammola delivering a characteristically animated plenary lecture at the 35th European Congress of Arachnology in Zadar, Croatia. Tin Rožman & Iva Cupic
Q: You also saw a bump in successful jokes halfway through the presentation. Why do you think that is?
A: When you are speaking to an audience, you realize at some point that you’re starting to lose them. Their minds start to drift. It’s inevitable. And I think an experienced speaker, about halfway through a presentation, is able to throw in a very nice joke to re-engage the mind.
Q: What other trends did you observe?
A: Male speakers told more jokes on average, and jokes delivered by male and native English speakers tended to attract more medium- and high-intensity laughter. Are they really better at telling jokes, or is it that people are more willing to laugh? Joking is a risky activity, because we have this idea that scientists should be serious, and the ability to take risks is not equally distributed. It's a powerful reminder that inequality in academia affects so many things. I think part of the solution is changing the status quo, discussing these issues, and exposing them.
Q: What do you want people to take away from this research?
A: One conclusion is just the importance of thinking about science communication. The information system in science is increasingly polluted. There are so many papers, so many conferences, so many talks, so much information. The ability to stand out from the crowd and effectively engage your audience is really important and something we need to actively think about.
Q: Of all the jokes you heard, do you have a particular favorite?
A: I cannot come up with a single joke, but what is most effective to me is when people use their bodies, when there's something totally unexpected in the way the speaker moves. To me, these are the most successful. I also came to really dislike all the stereotypical jokes from speakers talking right after lunch. I guess it's inevitable, but my data tells me it doesn't work. You have to be creative.
Scientific conferences can be a bore. Can jokes liven them up? (Vincent Barbier, 17 March 2026)
The other day, a new research idea struck me. The conceptual path was clear, but the execution would require real effort—synthesizing the literature, writing code, training models, performing statistical analysis. Just a few years ago, the next step would have been a no-brainer. I would recruit a graduate student into my lab and allow them to run with the project, providing guidance along the way. This time, an uncomfortable thought crept into my head: Should I just give these tasks to artificial intelligence (AI) rather than take a chance on a student?
I thought about the skills I had when I started graduate school more than a decade ago, and how much mentoring it took to get me where I am today. I had zero research experience when I emailed faculty to say I was interested in computer science Ph.D. programs. I did my basic due diligence, reading up on what they worked on. But sitting in their offices, listening to them talk about robotics, algorithms, and natural language processing, I had little to no clue what these concepts really meant.
One professor saw past my ignorance and agreed to take me on. I was incredibly grateful for the opportunity, but the first few months were a harsh reality check. I worked tirelessly—reading, summarizing, drafting ideas, and trying to make sense of it all. Yet, whenever I would present my work to my adviser, she would look at the nonsense I had presented, give me feedback, and send me back to start from scratch.
I thought about quitting. I felt I was constantly disappointing her. But she didn’t give up on me. Perhaps she believed in my potential, perhaps she saw I was doing the best I could, or perhaps she simply believed in the process of cultivating a scholar. It took a year or so of immense patience before I finally produced something we could build on. From there, I slowly transformed from a clueless novice into a junior colleague.
Years later, when I became a professor, I watched my own students struggle to make progress, just as I once had. My calendar filled up with meetings where my main job was to untangle their confusion. Eventually, though, the investment paid off, and I experienced the deep satisfaction of watching them transform into capable junior collaborators.
Now, AI has introduced a new option. It is certainly no extraordinary intellectual partner. But it can competently perform a lot of the work I need immediately; AI requires no ramp-ups, no meetings, and absolutely no emotional support. It is forcing a quiet, uncomfortable shift in my mindset.
The issue is not whether my students are valuable. In the long run, they are invaluable. The issue is that their value emerges slowly, whereas AI delivers immediate returns. I feel somewhat embarrassed to admit how tempting this is. In our culture, preferring an algorithm to a trainee feels like a betrayal of the academic mission.
Yet I see these calculations shaping the labs around me. Close colleagues are quietly refraining from taking on as many students as they used to. When they do take students, they are noticeably pickier.
My immediate instinct is to expect any student I recruit in this new environment to contribute at a much higher level from the outset. But to meet those elevated expectations, a student would likely rely heavily on the same AI tools I could turn to on my own. In the process, they may bypass the valuable experience of struggling through early tasks and learning from their mistakes. Students, I worry, could simply become intermediaries between the raw idea and the AI's output.
For faculty, meanwhile, the pressure to produce remains relentless and the scientific pace is unforgiving, making a productive and frictionless AI even more tempting. The real danger I see is not that AI will entirely replace graduate students in the foreseeable future. It is that the default assumption that taking on students is simply part of any professor’s academic journey will quietly erode. In some cases, the most pragmatic solution could be to use an AI.
I’m not sure where that will leave students who start with no research experience. Personally, I am seriously tempted not to take a chance on a novice for my new project—which means today, I probably wouldn’t recruit my younger self.
Do you have an interesting career story to share? You can find our author guidelines here.
Why I may 'hire' AI instead of a graduate student (Vincent Barbier, 12 March 2026)
Nearly half of biomedical scientists worry preprints could spread shoddy research and misinformation, according to a new survey that could help explain why the life sciences have taken up the publishing practice more slowly than some other fields.
The survey is one of the largest to date to examine the views of life sciences researchers on the practice of placing non–peer-reviewed manuscripts on public servers. The results, posted this week on the bioRxiv preprint server, also reveal that researchers on average do not believe publishing preprints enhances their career advancement. But many acknowledge benefits, such as spreading their findings more quickly than peer-reviewed journals do and helping them find collaborators.
“This study makes a valuable contribution because it highlights the persistent tension between the benefits of rapid dissemination and the way research is evaluated,” says Jeremy Ng of University Hospital Tübingen, who studies health research methodology and was not involved in the new study. “Hiring, promotion, and funding decisions often still revolve around traditional journal publications.”
Biomedical preprints have become more common over the past decade and spiked during the COVID-19 pandemic. But previous studies have indicated larger shares of physicists and economists regularly post preprints than researchers in the life sciences. “We wanted to know what is stopping the [biomedical] community from adopting them to a larger extent,” says information scientist Chaoqun Ni of the University of Wisconsin–Madison, who led the new study.
The survey, completed by nearly 1800 biomedical researchers in the United States and Canada in early 2025, reveals substantial variations in the use of preprinting. Two-thirds of respondents read at least one preprint during the previous 2 years. Only about half of respondents had submitted one in that time span, and only one-third had cited a preprint. Junior scientists were more likely to embrace these practices.
Among respondents not reading or citing preprints, the most common reason was concerns over quality. Among all survey takers, 42% predicted a strongly negative effect on science from preprints that spread misinformation. In comments submitted with their survey answers, some respondents voiced strong reservations about the growing use of artificial intelligence (AI). “Professors [could] mass-generate preprints with AI,” wrote an unnamed associate professor. These could “crowd out legitimate scholars who are publishing at a slower pace because they are actually doing real studies and going through peer review.”
Worries about quality may come disproportionately from clinical researchers concerned that the lack of independent vetting of preprints could jeopardize patient safety, says Richard Sever, chief science and strategy officer at openRxiv, a nonprofit that operates the widely used bioRxiv and medRxiv preprint servers devoted to biomedical science. (The new study does not report responses separately by subdiscipline.)
But concerns over quality may be based more on researchers’ impressions than evidence, Sever says, noting that bioRxiv and medRxiv reject submissions that don’t use the scientific method or that pose obvious risks to public health. Preprinting a fraudulent manuscript exposes it to more scrutiny than if it appeared only in a journal, he adds. “If you get a reputation for being the person who always puts up stuff [on preprint servers] which doesn’t have complete data and is shoddy, then you’re done in academia.” What’s more, some 80% of preprints eventually appear in peer-reviewed journals. And despite their quality checks, journals publish problematic papers, he says.
Respondents to Ni’s survey also saw upsides to preprinting, with about half agreeing it can accelerate the dissemination of scientific findings compared with journals, where peer review can take months and much of the content is paywalled. That finding echoes results of a survey of 7000 bioRxiv and medRxiv users, conducted by openRxiv in 2023 and posted on 26 February, in which respondents praised fast dissemination of findings as a top benefit.
Only about 16% of respondents agreed strongly that preprints reduce the importance that professional evaluators—those who review grant applications or make hiring and tenure decisions—place on articles in subscription-based, selective, peer-reviewed journals. Shifting away from traditional journals is a goal that advocates of open science have touted and some funders have embraced. For example, in 2025 the Gates Foundation began requiring grantees to post as preprints all manuscripts that result from research it funds, and it stopped paying for researchers to publish their papers in journals that charge a fee to make papers free.
Still, many universities’ professional review procedures explicitly prefer or require peer-reviewed publications, Ni notes. More than 60% of the survey respondents involved in funding, hiring, or tenure decisions said they give more credit to peer-reviewed papers than preprints; less than 12% said they credit both types equally. “Nobody has time to read preprints from 30 candidates for a position or award to determine their value,” an associate professor wrote in another survey comment. “Thus, we use journal [publications]. At least as a reviewer, we know there has been some bar surpassed.”
To help readers better judge the quality of preprints, Ni’s preprint suggests that preprint server managers find automated ways to summarize the rigor and transparency of each manuscript they post. Ng, who co-authored a 2024 survey of biomedical researchers’ views on preprinting, cautions that any such indicators “would need to strike a careful balance [to] avoid the oversimplification of research quality into a single score or checklist.” He argues professional evaluators need to judge the transparency and rigor of applicants’ research for themselves. “If institutions want to encourage open science practices, they need to ensure that researchers are not penalized, either explicitly or implicitly, for sharing their work early.”
Career effects of preprints get mixed reviews from biomedical researchers (Vincent Barbier, 6 March 2026)
The girl in the lab coat was extracting DNA from a piece of lettuce. She held the pipette like it was something sacred—like it might break if she breathed too hard. Beside her, a boy adjusted his goggles, avoiding eye contact. He didn’t ask a single question. Not because he didn’t have any, but because somewhere along the way, someone taught him to stay quiet. Outside those walls, their parents were at work under the Arizona sun, harvesting the same crop. They pulled lettuce from the earth to feed the country. Their kids pulled out its genetic material to understand it. The overlap was intentional. In this 1-week summer camp, we aimed to show students that there is a path from the agricultural work their communities have done for generations to STEM.
The program was personal for me because I, too, grew up in an agricultural town, the son of immigrant farmworkers. Schools were underfunded, the guidance counselor overworked, and expectations modest. College wasn’t the assumed path—it was the exception. I know what it’s like to sit in classrooms that prepare you for labor, not leadership, and to feel the quiet sorting that tells some students they belong in universities and others they don’t.
I graduated high school with a 1.9 GPA, so community college was my only option—and even then, I struggled. My first year was marked by a string of withdrawals and failing grades, culminating in a 0.0 GPA. But slowly, class by class, I found my footing. A few instructors encouraged me to stay with it, and eventually I was able to transfer. A decade and a half later, I had earned a master’s degree at Johns Hopkins University and a doctorate from Harvard University—outcomes the student I once was could never have imagined.
Today, I am a tenure-track faculty member at Arizona State University, a role that still feels improbable given my beginnings. Shortly after I started, state education officials approached my academic unit with an idea: to launch a STEM program for students from migratory farmworker families, a group that is underrepresented in science despite descending from generations of agricultural knowledge holders.
I know what programs like this can make possible. I am a product of federally supported training programs that intervened at critical stages in my own education. When I was an undergraduate student, for instance, a Department of Education program for students from disadvantaged backgrounds helped nudge me toward doctoral study when that path still felt distant. I have long believed that genius is evenly distributed across society, and that it just needs room to surface through exposure to science.
So I accepted the state’s challenge, and with colleagues developed a program that enrolled 50 to 80 high school students each year for four summers. Students lived in dorms, ate in dining halls, and rotated through immersive, hands-on labs led by faculty. Designed to replicate the university experience, the weeklong program aimed to make science tangible and accessible. Evaluation across cohorts showed consistent gains, including increased interest in STEM careers as well as meaningful rises in college aspirations. On paper, the program worked.
My favorite outcomes, however, were ones not captured by numbers. For many students, this was their first time away from home. They arrived shy and guarded, unsure how to introduce themselves or how to relate to the academic world. As the days progressed and they stepped into university labs and saw people who looked like them, they began confidently asking questions and talking openly about wanting to become doctors and researchers. By the end, they were reluctant to leave the community they had built in just 1 week.
The experience stayed with me because I was once the student with a 1.9 GPA, unsure of my place, waiting for someone to see potential that the data could not show. I could identify with the girl who once hesitated with the pipette and now steadies it with confidence. I see myself in the boy who had not asked a single question but now leans forward, curious and engaged.
I wish the system did a better job of looking beyond traditional academic metrics when assessing potential. Because watching these students, I am reminded how transformative that moment can be when someone finally sees in you what you could not yet see in yourself.
Why we should look beyond grades to spot potential in STEM (Vincent Barbier, 5 March 2026)