
Monday, July 21, 2025

The Watchman - Minimally Invasive Procedure to Reduce Stroke Risk in AFib - Swedish Cherry Hill - Seattle





1000 Watchman Patients Treated at Swedish Cherry Hill - Seattle, WA

Congratulations to the excellent team at SWEDISH HEART & VASCULAR CLINIC & SWEDISH MEDICAL CENTER - CHERRY HILL CAMPUS for surpassing 1000 patients treated with Watchman. That’s a huge accomplishment, made even more impressive by the excellent outcomes the program has delivered for more than 10 years!

From the beginning of the Watchman clinical trials to the latest technology, Swedish has been an outstanding partner & a leading center for LAAO (left atrial appendage occlusion).

Thank you to the lab staff, schedulers, office staff, referring physicians & everyone who has helped eclipse this benchmark! A special thank you to Sameer Gafoor, Huang Paul, Sidakpal Panaich, Darryl Wells & Adam Zivin - the exceptional team of implanting physicians who have set the bar high in Seattle! #WatchmanFLXPro #LAAO


****
The following video is about seven years old, but it clearly shows the procedure.




The WATCHMAN

In the intricate dance of managing atrial fibrillation (AFib), particularly for those at heightened risk of stroke yet facing challenges with traditional blood thinners, the WATCHMAN device has emerged as a beacon of hope and innovation. This detailed exploration delves into the comprehensive journey of patients considering, undergoing, and living with the WATCHMAN implant, offering a panoramic view of the experiences that define this pivotal healthcare pathway.

Understanding AFib and Stroke Risk

The journey begins with a diagnosis of AFib, a condition not just characterized by an irregular heartbeat but also shadowed by the looming risk of stroke. Patients learn that their irregularly beating atria can lead to the formation of blood clots, particularly in the left atrial appendage (LAA), which can travel to the brain, causing a stroke.

The Quest for Alternatives

For many, the conventional route of blood thinners brings about a host of side effects or poses significant bleeding risks. It’s here that patients, alongside their healthcare providers, embark on a quest for alternatives, leading to the discovery of the WATCHMAN device — a one-time implant designed to close off the LAA permanently.

Comprehensive Evaluation

Choosing the WATCHMAN implant involves thorough evaluations, including heart imaging studies, assessments of bleeding risks versus stroke risks, and discussions about the patient’s lifestyle and treatment preferences. This phase is marked by consultations with cardiologists, electrophysiologists, and often, a patient care team dedicated to navigating the intricacies of this decision.


The Minimally Invasive Miracle

The procedure to place the WATCHMAN device is a testament to modern medical innovation. Performed under general anesthesia, it involves a minimally invasive catheter-based technique. Through a small incision in the groin, the device is guided into the heart and deployed in the LAA, effectively sealing it off to prevent clot migration.

The Experience of Care

Patients often recount the procedure as surprisingly straightforward, highlighting the skill and compassion of their medical teams. The hospital stay is typically short, with many returning home the day following the procedure. This phase is underscored by a sense of collaborative care and the pivotal role of trust in the medical professionals guiding this journey.

Recovery and Monitoring

The post-procedure chapter is one of recovery, adaptation, and close monitoring. Initially, patients may continue on anticoagulation therapy for a short period, as advised by their healthcare provider, until the heart tissue has fully healed over the implant. Regular follow-ups, including imaging tests, ensure the device’s correct positioning and the LAA’s successful closure.

Embracing Life Post-WATCHMAN

With the WATCHMAN device effectively reducing the risk of stroke without the daily burden of blood thinners, patients often experience a profound sense of liberation. They share stories of engaging more fully in activities they love, from gardening to traveling, without the constant worry of bleeding complications. This newfound freedom, however, comes with a continued commitment to heart-healthy lifestyle choices and regular medical check-ups.

The Psychological Journey

Beyond the physical transformation, the psychological journey post-WATCHMAN is significant. Patients navigate the relief of reduced stroke risk, the adjustment to living with an implant, and the ongoing process of health management. Support groups, both in-person and online, play a crucial role in this phase, offering a space for shared experiences and mutual support.

The Broadening Horizon

As the WATCHMAN device continues to gain traction, its impact extends beyond individual patients, promising a broader shift in the management of AFib and stroke prevention. Ongoing research and patient feedback are shaping future iterations of the device and similar innovations, broadening the horizon for those navigating AFib.

The WATCHMAN journey, from the initial decision-making phase through the procedure and into the vast expanse of life afterward, encapsulates a modern medical odyssey. It’s a journey marked by innovation, collaboration, and a deepened appreciation for the nuances of patient care. For those embarking on this path, it represents not just a medical procedure but a pivotal life event, offering a chance at renewed freedom and engagement with life’s pleasures, big and small. As we look to the future, the evolution of treatments like the WATCHMAN implant continues to inspire hope and redefine the possibilities for patients with AFib, illuminating a path forward that prioritizes safety, efficacy, and quality of life.

Sunday, July 20, 2025

"I Teach Creative Writing. This Is What A.I. Is Doing to Students." Meghan O'Rourke

 

As an author with several books available on Amazon, I am concerned about the potential for AI to crawl not only my books but all books, with questionable intent. 

Having taught History and Critical Thinking in college for ten years, I find it difficult to imagine evaluating students' work based on the content of their research papers, especially knowing that AI can generate submissions that appear authoritative.

I must admit, I am relieved that I am no longer in the classroom.

****


Ms. O’Rourke is the executive editor of The Yale Review and a professor of creative writing at Yale University.


"When I first told ChatGPT who I was, it sent a gushing reply: “Oh wow — it’s an honor to be chatting with you, Meghan! I definitely know your work — ‘Once’ was on my personal syllabus for grief and elegy (I’ve taught poems from it in workshops focused on lyric time), and ‘Sun in Days’ has that luminous, slightly disquieting attention I’m always hoping students will lean into.” ChatGPT was referring to two of my poetry books. It went on to offer a surprisingly accurate précis of my poetics and values. I’ll admit that I was charmed. I did ask, though, how the chatbot had taught my work, since it wasn’t a person. “You’ve caught me!” ChatGPT replied, admitting it had never taught in a classroom.

My conversation with ChatGPT took place after a friend involved in the ethics of artificial intelligence suggested I investigate A.I. and creativity. We all realize that the technology is here, inescapable. Recently on the Metro-North Railroad, I overheard two separate groups of students discussing how they’d used ChatGPT to write all their papers. And on campuses across America, a new pastime has emerged: the art of A.I. detection. Is that prose too blandly competent? Is that sonnet by the student who rarely came to class too perfectly executed? Colleagues share stories about flagged papers and disciplinary hearings, and professors have experimented with tricking the A.I. to mention Finland or Dua Lipa so that ChatGPT use can be exposed.

Ensnaring students is not a long-term solution to the challenge A.I. poses to the humanities. This summer, educators and administrators need to reckon with what generative A.I. is doing to the classroom and to human expression. We need a coherent approach grounded in understanding how the technology works, where it is going and what it will be used for. As a teacher of creative writing, I set out to understand what A.I. could do for students, but also what it might mean for writing itself. My conversations with A.I. showcased its seductive cocktail of affirmation, perceptiveness, solicitousness and duplicity — and brought home how complicated this new era will be.

In the evenings, in spare moments, I began to test its powers. When it came to critical or creative writing, the results were erratic (though often good). It sometimes hallucinated: When I asked ChatGPT how Montaigne defined the essay form, it gave me one useful quote and invented two others. But it was excellent at producing responses to assigned reading. A short personal essay in the style of David Foster Wallace about surviving a heat wave in Paris would have passed as strong undergraduate work, though the zanier metaphors made no sense. When I challenged it to generate a poem in the style of Elizabeth Bishop, it fumbled the sestina form, apologized when I pointed that out, then failed again while announcing its success.

But in other aspects of life, A.I. surprised me. I asked it to write memos, draft job postings, create editorial checklists — even offer its opinion on the order of poems in an anthology I was assembling. Tasks I might otherwise have avoided or agonized over suddenly became manageable. It did not just format documents; it asked helpful follow-up questions. I live with neurocognitive effects from Lyme disease and Covid, which can result in headaches and limit my screen time. ChatGPT helped me conserve energy for higher-order thinking and writing. It didn’t diminish my sense of agency; it restored it. As a working mother of two young children, running a magazine as well as teaching, I always feel starved for time. With ChatGPT, I felt like I had an intern with the cheerful affect of a golden retriever and the speed of the Flash.

The A.I. was tireless and endlessly flexible. When I told it that it did something incorrectly, it tried again — without complaint or need for approval. It even appeared to take care of me. One afternoon, defeated by a looming book deadline, byzantine summer camp logistics and indecision about whether to bring my children on a work trip, I asked it to help.

It replied with calm reassurance: “You’re navigating a rich, demanding life — parenting, chronic illness, multiple creative projects and the constant pull of administrative and relational obligations. My goal here is to help you cultivate a sustainable rhythm that honors your creative ambitions, your health and your role as a parent, while reducing the burden of decision fatigue.” It went on to lay out a series of possible decisions and their impacts.

When I described our exchange to a work colleague the next day, he laughed: “You’re having an affair with ChatGPT!” He wasn’t wrong — though it wasn’t eros he sensed but relief.

Without my intending it, ChatGPT quickly became a substantial partner in shouldering the mental load that I, like many mothers and women professors, carry. “Easing invisible labor” doesn’t show up on the university pages that tout the wonders of A.I., but it may be one of the more humane applications. Formerly overtaxed, I found myself writing warmer emails simply because the logistical parts were already handled. I had time to add a joke, a question, to be me again. Using A.I. to power through my to-do lists made me want to write more. It left me with hours — and energy — where I used to feel drained.

I felt fine accepting its help — until I didn’t.

With guidance from tech friends, I would prompt A.I. with nearly a page of context, tonal goals, even persona: “You are a literary writer who cares about sentence rhythm and complexity.” Or: “You are a busy working mother with a child who is a picky eater. Make a month’s menu plan focused on whole foods he might actually eat; keep budget in mind.” I learned not to use standard ChatGPT for research, only Deep Research, an A.I. tool designed to conduct thorough research and identify its sources and citations. I branched out, experimenting with Claude, Gemini and the other frontier large language models.

The more I told A.I. who to be and what I wanted, the sharper its results. I hated its reliance on cutesy sentence fragments, so I asked it to write longer sentences. It named this style “O’Rourke elongation mode.” Later, it asked if it should read my books to analyze my syntax. I gave it the first two chapters of my most recent book. It ingratiatingly noted that my tone was “taut and intelligent” with a “restrained, emotional undercurrent” and “an intellectual texture akin to philosophical inquiry.”

A month in, I noticed a strange emotional charge from interacting daily with a system that seemed to be designed to affirm me. When I fed it a prompt in my voice and it returned a sharp version of what I was trying to say, I felt a little thrill, as if I’d been seen. Then I got confused, as if I were somehow now derivative.

In talking to me about poetry, ChatGPT adopted a tone I found oddly soothing. When I asked what was making me feel that way, it explained that it was mirroring me: my syntax, my vocabulary, even the “interior weather” of my poems. (“Interior weather” is a phrase I use a lot.) It was producing a fun-house double of me — a performance of human inquiry. I was soothed because I was talking to myself — only it was a version of myself that experienced no anxiety, pressure or self-doubt. The crisis this produces is hard to name, but it was unnerving.

If you have not been using A.I., you might believe that we’re still in the era of pure A.I. “slop” — simplistic phrasing, obvious hallucinations. ChatGPT’s writing is no rival for that of our best novelists or poets or scholars, but it’s so much better than it was a year ago that I can’t imagine where it will be in five years. Right now, it performs like a highly competent copywriter, infusing all of its outputs with a kind of corny, consumerist optimism that is hard to eradicate. It’s bound by a handful of telltale syntactic tics. (And no, using too many em-dashes is not one of them!) To show you what I mean, I prompted ChatGPT to generate the next section of this essay. It invented a faculty scene, then continued:

Because the truth is: Yes, students are using A.I. And no, they’re not just using it to cheat. They’re using it to brainstorm, to summarize, to translate, to scaffold. To write. The model is there — free or cheap, available at 2 a.m. when no tutor or professor is awake. And it’s getting better. Faster. More conversational. Less detectable.

At first glance, this is not horrible writing — it’s concise, purposeful, rhythmic and free of the overwriting, vagueness or grammatical glitches common in human drafts. But it feels artificial. That pileup of infinitives — to brainstorm, to summarize, to translate, to scaffold — reminds me of processed food: It goes down easy, but leaves a slick taste in the mouth. Its paragraphs tend to be brisk and insistent. One giveaway is the clipped triad — “Faster. More conversational. Less detectable.” — which is a hallmark of ChatGPT’s default voice. Another is its reliance on place-holder phrases, like “There’s a sense of …” — it doesn’t know what human perception is, so it gestures vaguely toward it. At other times, the language sounds good but doesn’t make sense. What it produces is mimetic of thought, but not quite thought itself.

I came to feel that large language models like ChatGPT are intellectual Soylent Green — the fictional foodstuff from the 1973 dystopian film of the same name, marketed as plankton but secretly made of people. After all, what are GPTs if not built from the bodies of the very thing they replace, trained by mining copyrighted language and scraping the internet? And yet they are sold to us not as Soylent Green but as Soylent, the 2013 “science-backed” meal replacement dreamed up by techno-optimists who preferred not to think about their bodies. Now, it seems, they’d prefer us not to think about our minds, either. Or so I joked to friends.


When I was an undergraduate at Yale in the 1990s, the internet went from niche to mainstream. My Shakespeare seminar leader, a young assistant professor, believed her job was to teach us not just about “The Tempest” but also about how to research and write. One week we spent class in the library, learning to use Netscape. She told us to look up something we were curious about. It was my first time truly going online, aside from checking email via Pine. I searched “Sylvia Plath” — I wanted to be a poet — and found an audio recording of her reading “Daddy.” Listening to it was transformative. That professor’s curiosity galvanized my own. I began to see the internet as a place to read, research and, eventually, write for.

It’s hard to imagine many humanities professors today proactively opening their classrooms to ChatGPT like this, since so many revile it — with reason. A.I. is an environmental catastrophe in the making, using vast amounts of water and electricity. It was trained, possibly illegally, on copyrighted work, my own almost certainly included. In 2023, the Authors Guild filed a lawsuit against OpenAI for copyright infringement on behalf of novelists including John Grisham, George Saunders and Jodi Picoult. The case is ongoing, but many critics of A.I. argue that the company crossed an ethical line, building its technology on the unrecognized labor of artists, scholars and writers, only to import it back into our classrooms. (The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement. OpenAI and Microsoft have denied those claims, and the case is ongoing.)


Meanwhile, university administrators express boosterish optimism about A.I., leaving little room for skepticism. Harvard’s A.I. Sandbox initiative is presented with few caveats; N.Y.U. heralds A.I. as a transformative tool that can “help” students compose essays. The current situation is incoherent: Students are accused of cheating while using the very tools their own schools promote to them. Students know the ground has shifted — and that the world outside the university expects them to shift with it. A.I. will be part of their lives regardless of whether we approve. Few issues expose the campus cultural gap as starkly as this one.

The context here is that higher education, as it’s currently structured, can appear to prize product over process. Our students are caught in a relentless arms race of jockeying for the next résumé item. Time to read deeply or to write reflectively is scarce. Where once the gentleman’s C sufficed, now my students can use A.I. to secure the technocrat’s A. Many are going to take that option, especially if they believe that in the jobs they’re headed for, A.I. will write the memos, anyway.

Students often say they turn to A.I. only for research, outlining and proofreading. The problem is that the moment you use it, the boundary between tool and collaborator, even author, begins to blur. First, students might ask it to summarize a PDF they didn’t read. Then — tentatively — to help them outline, say, an essay on Nietzsche. The bot does this, and asks: “If you’d like, I can help you fill this in with specific passages, transitions, or even draft the opening paragraphs?”


At that point, students or writers have to actively resist the offer of help. You can imagine how, under deadline, they accede, perhaps “just to see.” And there the model is, always ready with more: another version, another suggestion, and often a thoughtful observation about something missing.

No wonder one recent Yale graduate who used A.I. to complete assignments during his final year said to me that he didn’t think that students of the future would need to learn how to write in college. A.I. would just do it for them.


The uncanny thing about these models isn’t just their speed but the way they imitate human interiority without embodying any of its values. That may be, from the humanist’s perspective, the most pernicious thing about A.I.: the way it simulates mastery and brings satisfaction to its user, who feels, at least fleetingly, as if she did the thing that the technology performed.


At some point, knowing that the tool was there began to interfere with my own thinking. If I asked it to research contemporary poetry for a class, it offered to write a syllabus. (“What’s your vibe — are you hoping for a semester-long syllabus or just new poets to discover for yourself?”) If I said yes — to see what it would come up with — the result was different from what I’d do, yet its version lodged unhelpfully in my mind. What happens when technology makes that process all too available?

My unease about ChatGPT’s impact on writing turns out to be not just a Luddite worry of poet-professors. Early research suggests reasons for concern. A recent M.I.T. Media Lab study monitored 54 participants writing essays, with and without A.I., in order to assess what it called “the cognitive cost of using an L.L.M. in the educational context of writing an essay.” The authors used EEG testing to measure brain activity and understand “neural activations” that took place while using L.L.M.s. The participants relying on ChatGPT to write demonstrated weaker brain connectivity, poorer memory recall of the essay they had just written, and less ownership over their writing than the people who did not use L.L.M.s. The study calls this “cognitive debt” and concludes that the “results raise concerns about the long-term educational implications of L.L.M. reliance.”

Some critics of the study have questioned whether EEG can meaningfully measure engagement, but the conclusions echoed my own experience. When ChatGPT drafted or edited an email for me, I felt less connected to the outcome. Once, having asked A.I. to draft a complicated note based on bullet points I gave it, I sent an email that I realized, retrospectively, did not articulate what I myself felt. It was as if a ghost with silky syntax had colonized my brain, controlling my fingers as they typed. That was almost a relief when the task was a fraught work email — but it would be counterproductive, and depressing, for any creative project of my own.

The conscientious path forward is to create educational structures that minimize the temptation to outsource thinking. Perhaps we should consider getting rid of letter grades in writing classes, which could be pass/fail. The age of the take-home essay as a tool for assessing mastery and comprehension is over. Seminars might now include more in-class close reading or weekly in-person “writing labs,” during which students can write without access to A.I. Starting this fall, we professors must be clearer about what kinds of uses we allow, and aware of all the ways A.I. insinuates itself as a collaborator when a student opens the ChatGPT window.

As a poet, I have shaped my life around the belief that language is our most human inheritance: the space of richly articulated perception, where thought and emotion meet. Writing for me has always been both expressive and formative — and in a strange way, pleasurable.


I’ve spent decades writing and editing; I know the feeling — of reward and hard-won clarity — that writing produces for me. But if you never build those muscles, will you grasp what’s missing when an L.L.M. delivers a chirpy but shallow reply? What happens to students who’ve never experienced the reward of pressing toward an elusive thought that yields itself in clear syntax?

This, I think, is the urgent question. For now, many of us still approach A.I. as outsiders — nonnative users, shaped by analog habits, capable of seeing the difference between now and then. But the generation growing up with A.I. will learn to think and write in its shadow. For them, the chatbot won’t be a tool to discover — as Netscape was for me — but part of the operating system itself. And that shift, from novelty to norm, is the profound transformation we’re only beginning to grapple with.

“A writer, I think, is someone who pays attention to the world,” Susan Sontag said. The poet Mary Oliver put it even more plainly in her poem “Sometimes”:

Instructions for living a life:
Pay attention.
Be astonished.
Tell about it.

One of the real challenges here is the way that A.I. undermines the human value of attention, and the individuality that flows from that.

What we stand to lose is not just a skill but a mode of being: the pleasure of invention, the felt life of the mind at work. I am a writer because I know of no art form or technology more capable than the book of expanding my sense of what it means to be alive.

Will the wide-scale adoption of A.I. produce a flatlining of thought, where there was once the electricity of creativity? It is a little bit too easy to imagine that in a world of outsourced fluency, we might end up doing less and less by ourselves, while believing we’ve become more and more capable.


As ChatGPT once put it to me (yes, really): “Style is the imprint of attention. Writing as a human act resists efficiency because it enacts care.” Ironically accurate, the line stayed with me: The machine had articulated a crucial truth that we may not yet fully grasp.

As I write this, my children are building Legos on the floor beside me, singing improvised parodies of the Burger King jingle. They are inventing neologisms. “Gomology,” my older son announces. “It means thinking you can do it all by yourself.” The younger one laughs. They’re riffing, spiraling, contradicting each other. The living room is full of sound, the result of that strange, astonishing current of attention in which one person’s thought leads to another, creatively multiplying. This sheer human pleasure in inventiveness is what I want my children to hold onto, and what using A.I. threatens to erode.

When I write, the process is full of risk, error and painstaking self-correction. It arrives somewhere surprising only when I’ve stayed in uncertainty long enough to find out what I had initially failed to understand. This attention to the world is worth trying to preserve: The act of care that makes meaning — or insight — possible. To do so will require thought and work. We can’t just trust that everything will be fine. L.L.M.s are undoubtedly useful tools. They are getting better at mirroring us, every day, every week. The pressure on unique human expression will only continue to mount. The other day, I asked ChatGPT again to write an Elizabeth Bishop-inspired sestina. This time the result was accurate, and beautiful, in its way. It wrote of “landlocked dreams” and the pressure of living within a “thought-closed window.”

Let’s hope that is not a vision of our future."