WRDashboard


Articles

The Backing Bookworm

We All Want Impossible Things


After reading and loving Newman's book Sandwich (on audio) a couple of weeks ago, I promptly put this audiobook by the same author on hold at the library. I enjoyed parts of this book and while it felt very similar to Sandwich, I was surprised that it didn't hit me nearly as hard with its emotion and characters.
Based on its blurb, I expected an emotional read about two friends - one of whom is dying of ovarian cancer. Despite Edi having cancer, the story mainly focuses on Ash. And I didn't love her. She's a hard character to connect with - she's funny but she came off as juvenile and self-absorbed. Maybe this story didn't hit as hard as I had hoped because it's told solely from her perspective, but despite having her POV, it was hard to understand why Ash made the choices she did. I didn't like how the story became about her, not her dying friend.
The story is a bit convoluted as it jumps between memories from their decades-long friendship and scenes from Ash's family and love life. The story is generously sprinkled with Newman's sometimes irreverent sense of humour (which may not be to all tastes) and there are some poignant moments.
This shorter than expected story felt longer than it needed to be and I couldn't help but feel that the author was trying too hard to be quirky and funny when I wanted more focus on the poignancy of the long-standing friendship.

My Rating: 3 stars
Author: Catherine Newman
Genre: Contemporary Fiction
Type and Source: eaudiobook from public library
Narrator: Jane Oppenheimer
Run Time: 6 hours, 55 min
Publisher: HarperAudio
First Published: November 8, 2022
Read: Feb 3 - 5, 2025

Book Description from GoodReads: For lovers of Meg Wolitzer, Maria Semple, and Jenny Offill comes this raucous, poignant celebration of life, love, and friendship at its imperfect and radiant best.
Edith and Ashley have been best friends for over forty-two years. They've shared the mundane and the momentous together: trick or treating and binge drinking; Gilligan's Island reruns and REM concerts; hickeys and heartbreak; surprise Scottish wakes; marriages, infertility, and children. As Ash says, "Edi's memory is like the back-up hard drive for mine."

But now the unthinkable has happened. Edi is dying of ovarian cancer and spending her last days at a hospice near Ash, who stumbles into heartbreak surrounded by her daughters, ex(ish) husband, dear friends, a poorly chosen lover (or two), and a rotating cast of beautifully, fleetingly human hospice characters.

As The Fiddler on the Roof soundtrack blasts all day long from the room next door, Edi and Ash reminisce, hold on, and try to let go. Meanwhile, Ash struggles with being an imperfect friend, wife, and parent—with life, in other words, distilled to its heartbreaking, joyful, and comedic essence.

For anyone who’s ever lost a friend or had one. Get ready to laugh through your tears.



Brickhouse Guitars

Julien Sublet OM #24 Demo


Code Like a Girl

How the JVM Manages Strings in Java.

In this article, I will explain the core Java tools (the JDK, JRE, and JVM), followed by Java Strings, the String Pool, String immutability, the runtime memory area, the class loader, reference types, the heap and the call stack, and why Java is platform independent.

Before discussing the JVM architecture, let’s first understand how a Java application runs on a computer.

Computers follow the von Neumann architecture, which is the foundation of modern computing. This architecture allows programs stored in memory to be executed as needed. Programs are executed by the processor, while applications are stored in memory. When a program runs, it is loaded into RAM for execution.

Normally, when an application is executed, the source code is compiled into machine code, which the computer can understand and execute. However, in Java, the Java compiler compiles the source code into bytecode instead of directly converting it into machine code.

Let’s see how it happens.

Machines cannot directly understand or execute bytecode, which is why the Java Virtual Machine (JVM) is used. The JVM is designed to interpret and execute bytecode. When Java code is written, it is first compiled into bytecode by the Java compiler, and this bytecode is then loaded into the JVM. Inside the JVM, the interpreter executes the bytecode instruction by instruction and communicates with the processor to run the program, while the Just-In-Time (JIT) compiler translates frequently executed bytecode into machine-specific instructions that run directly on the processor. Throughout this process, the JVM optimizes the code, enhancing the efficiency and performance of the program over time.

What is the JIT compiler?

The Just-In-Time (JIT) compiler is a component of the Java Virtual Machine (JVM) that plays a crucial role in improving the performance of Java applications. It works by translating bytecode, which is an intermediate representation of the Java code, into native machine code at runtime. This translation happens just before the code is executed, hence the name “Just-In-Time.”

As the JIT compiler processes the bytecode, it identifies and translates frequently executed instructions into machine code. Once these instructions are translated, they are stored in memory. This means that the same instructions do not need to be translated repeatedly during subsequent executions. By reusing the already translated machine code, the JIT compiler significantly reduces the overhead of interpretation, leading to faster execution and enhanced efficiency over time. This dynamic optimization process helps improve the overall performance of Java applications as they run.
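If you want to see the JIT compiler at work, one rough approach (assuming the HotSpot JVM, whose -XX:+PrintCompilation flag logs methods as they are compiled) is to run a small hot loop and watch the compilation log. The class below is only an illustrative sketch:

public class JitDemo {
    // Run with: java -XX:+PrintCompilation JitDemo
    // On HotSpot, sum() shows up in the compilation log once the loop
    // has executed enough times to be considered "hot".
    static long sum(long n) {
        long total = 0;
        for (long i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        for (int i = 0; i < 10_000; i++) {
            result += sum(100_000);
        }
        System.out.println(result);
    }
}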

Why is Java platform independent?

Java is considered platform-independent because its code is compiled into bytecode, which can run on any operating system with a Java Virtual Machine (JVM). Unlike other languages like C, where compiled code is specific to the operating system and processor architecture, Java’s bytecode can be executed on different operating systems without requiring recompilation. This allows Java applications to be developed once and run anywhere, making them highly portable and efficient across multiple platforms.

Okay, now let’s see what the JDK, JRE, and JVM are.

  • JDK — Java Development Kit
  • JRE — Java Runtime Environment
  • JVM — Java Virtual Machine
JDK — Java Development Kit

The JDK (Java Development Kit) is a software development kit used for building Java applications. It contains both the JRE (Java Runtime Environment), which is required to run Java programs, and a set of essential development tools. These tools include the Javac compiler for compiling Java code, the Java launcher for executing applications, the Jar utility for packaging files into JAR format, and the Javadoc generator for creating documentation, among others.

There are multiple vendors providing different distributions of the JDK, each offering unique features and support options.

  • Oracle JDK — The official version from Oracle, available with commercial support.
  • OpenJDK — The open-source reference implementation of the Java platform.
  • AdoptOpenJDK (Eclipse Adoptium) — A widely used, free OpenJDK distribution.
  • Amazon Corretto — A production-ready, free JDK from Amazon.
  • IBM Semeru (J9 JDK) — IBM’s optimized OpenJDK distribution.
  • SAP Machine — SAP’s OpenJDK build for enterprise applications.
  • Red Hat OpenJDK — A supported OpenJDK version from Red Hat.
  • Azul Zulu — A performance-optimized OpenJDK with commercial support.

Each vendor provides variations of the JDK, with some offering long-term support (LTS) and additional optimizations suited for different use cases.

JRE — Java Runtime Environment

The JRE (Java Runtime Environment) is the core component needed to run Java applications. It includes the JVM (Java Virtual Machine), which is responsible for executing Java bytecode, as well as a set of essential class libraries and runtime tools. These libraries include standard packages like java.lang, java.util, java.math, and various other utilities that Java programs rely on during execution. While the JRE is sufficient for running already developed Java applications, it does not include the development tools needed for writing or compiling Java code. For development, the JDK is required.

JVM — Java Virtual Machine

The JVM (Java Virtual Machine) is an abstract computing machine that enables a computer to run Java programs as well as programs written in other languages that are compiled into Java bytecode. It serves as a set of rules and specifications that define how bytecode should be executed. This flexibility allows developers to create their own implementations of the JVM. Besides Java, languages such as Scala, Kotlin, Groovy, and Clojure can also be compiled into Java bytecode, enabling them to run on the JVM and benefit from its features, such as garbage collection and platform independence.

Image from GeeksforGeeks

In the image above, you can see various components of the JVM, including the Class Loader, JVM Memory (also known as the Runtime Memory Area), Stacks, Heap, and Execution Engine. These components play a crucial role in the execution of Java programs. I will now describe some of the most important components in detail. Let’s go through them one by one.

Class Loader

The Class Loader in Java is responsible for loading .class files (bytecode) into the JVM for execution. When Java code runs, it is first compiled into bytecode (.class files), and the Class Loader loads these files into memory. It has three main parts.

  1. Bootstrap Class Loader — Loads core Java classes from the rt.jar (like java.lang and java.util).
  2. Extension Class Loader — Loads classes from the ext directory (lib/ext in older JDKs).
  3. Application Class Loader — Loads classes from the application’s classpath (src or bin folders).

This hierarchy ensures Java programs can dynamically load and execute classes as needed.
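As a quick illustration of the hierarchy (a minimal sketch; the exact loader names printed depend on your JDK version):

public class ClassLoaderDemo {
    public static void main(String[] args) {
        // Core classes are loaded by the Bootstrap Class Loader, which is
        // represented as null when queried from Java code.
        System.out.println(String.class.getClassLoader());          // null

        // Your own classes are loaded by the Application Class Loader.
        System.out.println(ClassLoaderDemo.class.getClassLoader()); // e.g. ...AppClassLoader
    }
}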

Execution Engine

The Execution Engine is responsible for running the loaded bytecode in the JVM. It translates bytecode into machine code that the computer can understand. This process is handled by the Interpreter, which converts and executes bytecode line by line. To improve performance, the Just-In-Time (JIT) Compiler helps by compiling frequently used bytecode into native machine code, making execution faster.

Runtime Memory Area

The Runtime Memory Area in the JVM manages memory during program execution. The Heap is the most important and frequently monitored memory area, as it stores objects and dynamically allocated data. If the heap memory is insufficient, issues like OutOfMemoryError can occur. Proper monitoring and optimization of the heap help ensure smooth execution of Java applications.
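To make the OutOfMemoryError scenario concrete, here is a deliberately wasteful sketch that keeps allocating until the heap runs out (if you try it, use a small heap such as -Xmx32m so the error appears quickly):

import java.util.ArrayList;
import java.util.List;

public class HeapExhaustionDemo {
    public static void main(String[] args) {
        List<int[]> hog = new ArrayList<>();
        while (true) {
            // Every array lives on the heap and stays reachable through the list,
            // so the Garbage Collector cannot reclaim it. Eventually the JVM
            // throws java.lang.OutOfMemoryError: Java heap space.
            hog.add(new int[1_000_000]);
        }
    }
}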

Method Area, Stack and Heap

The Method Area in the JVM stores essential information needed to run a program. It includes details about class structures, method names, variables, and constructors. This area helps set up the environment required for executing Java programs by keeping metadata and shared runtime information.

The Stack in the JVM follows an ordered structure for method execution, known as the Call Stack. When a method is called, a new frame is pushed onto the stack, and when the method completes, its frame is removed. This order ensures that methods execute and return in a structured manner, managing function calls efficiently.

The Heap in the JVM is where objects and their references are stored. When an object is created using the new keyword, it is allocated memory in the heap. This area allows dynamic memory allocation and is managed by Garbage Collection to free up unused objects.
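A small sketch that ties the two areas together: the object lives on the heap, while each method call gets its own stack frame holding local variables and copies of references.

public class StackHeapDemo {
    static class Point {
        int x;
        int y;
    }

    static void move(Point p) {       // 'p' is a copy of the reference, stored in this frame on the stack
        p.x = 10;                     // mutates the Point object on the heap that the caller also sees
    }

    public static void main(String[] args) {
        Point origin = new Point();   // 'new' allocates the object on the heap
        move(origin);                 // the reference value is copied into move()'s frame
        System.out.println(origin.x); // prints 10
    }
}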

Okay, now let’s see how Strings are handled in the JVM.

In Java, Strings are objects and belong to the reference data type. They are immutable, meaning their values cannot be changed once created. Since Java programs create many string objects, a String Pool is introduced inside the Heap to optimize memory usage. The String Pool stores unique string literals, ensuring that duplicate strings share the same memory reference instead of creating new objects. This improves efficiency and reduces memory consumption.

Reference Types

In Java, objects are stored in the heap, while their references are stored in the stack. A reference is a memory address that points to the object's location in the heap. By default, object references are null until assigned. When an object is passed to a method, a copy of its reference is passed (Java is always pass-by-value), so changes made to the object's fields inside the method affect the original object, although reassigning the parameter itself does not. This allows Java to efficiently manage memory and object manipulation.

String Pool

In Java, Strings are one of the most commonly used reference types. Normally, objects are stored in the Heap, but to efficiently manage String objects, Java introduces a String Pool inside the Heap.

When a String is created using a string literal, it is directly stored in the String Pool. If the same string already exists, Java reuses the existing reference instead of creating a new object, saving memory.

When a String is created using the new keyword, a new object is always created in the Heap, even if the same value already exists in the String Pool. The literal used in the expression is still placed in the pool (if it is not there already), and calling intern() on the object returns the pooled reference, adding a new entry to the pool if one does not yet exist.

Image from GeeksforGeeks

This String Pool mechanism helps improve memory efficiency by avoiding duplicate String objects and reducing Heap usage.
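A short example of the pooling behaviour described above (== compares references, while equals() compares contents):

public class StringPoolDemo {
    public static void main(String[] args) {
        String a = "Hello";              // literal: stored in (or reused from) the String Pool
        String b = "Hello";              // same literal: reuses the pooled object
        String c = new String("Hello");  // new keyword: a separate object on the Heap

        System.out.println(a == b);          // true  - both variables point to the pooled object
        System.out.println(a == c);          // false - c refers to a different object
        System.out.println(a.equals(c));     // true  - the character content is the same
        System.out.println(a == c.intern()); // true  - intern() returns the pooled reference
    }
}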

I have mentioned String immutability above. Now, let’s talk about what it means.

String immutability

String immutability in Java means that once a String object is created, its value cannot be changed.

String str = "Hello";
str = str + " World";

In this example, the original string "Hello" remains unchanged. Instead, a new String object is created with the value "Hello World", and the variable str now references this new object. The original string "Hello" still exists in memory, demonstrating that String objects are immutable; modifications result in new objects rather than altering the existing ones. This immutability ensures thread safety and improves performance by allowing string literals to be reused from the String Pool.
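The same point can be seen by keeping a second reference to the original object:

String original = "Hello";
String alias = original;          // a second reference to the same "Hello" object
original = original + " World";   // builds a brand-new String; nothing modifies "Hello"

System.out.println(alias);        // Hello        - the original object is untouched
System.out.println(original);     // Hello World  - a new object referenced by 'original'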

When we need mutable string-like behaviour, we can use StringBuilder and StringBuffer, which I explained in my previous article:

Understanding StringBuffer and StringBuilder in Java.
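For contrast, a minimal sketch of the mutable alternative: a StringBuilder modifies its internal character buffer in place instead of creating a new String for every change.

StringBuilder sb = new StringBuilder("Hello");
sb.append(" World");         // appends into the same underlying buffer
System.out.println(sb);      // Hello World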

In the context of String immutability and the String Pool, when a String object is created in Java, it first checks the String Pool to determine if an identical string already exists. If the string is found in the pool, the reference to the existing string is returned, and no new object is created. This reuse of references helps conserve memory.

String str1 = "Hello";
String str2 = "Hello";

Both str1 and str2 point to the same String object in the String Pool; this sharing is safe precisely because neither can be modified. If you attempt to change the value of str1:

str1 = str1 + " World";

A new String object is created with the value "Hello World", and str1 now references this new object, while the original "Hello" remains unchanged in the pool. This mechanism not only ensures that String objects are immutable but also optimizes memory usage by preventing duplicate string instances in the pool.

Okay, we’ve covered a lot about the JVM architecture, and there are many more aspects we could discuss. To summarize: in this article I have explained the JVM, JDK, and JRE, as well as the main components of Java memory management, such as the heap, the stack, reference types, and the String Pool. I hope you enjoyed this article and learned something new. For more, stay tuned with me. Good luck with your coding journey!

Goodbye, and best wishes on your coding adventures! 👋💻

How the JVM Manages Strings in Java. was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

Does a WordPress website have to be dull?

Many people think WordPress websites look generic and uninspired — but that’s only true if you don’t take the time to personalize them. Just picking a nice theme and color scheme isn’t enough. The key to making a website feel unique, professional, and memorable is in the details.

Here’s how to break out of that template-based look and make your WordPress website stand out with personality and professionalism.

1. Pick the Right Images (Don’t Rush This Step!)

One of the biggest mistakes people make is choosing stock photos randomly or sticking with whatever comes with the template. Images set the tone of your website, and picking the right ones can completely change the emotional impact your brand has.

✔ Use images that evoke emotions — Think about how you want your visitors to feel when they land on your site. Happy? Inspired? Calm? Choose images that match those emotions.

✔ Search beyond the first page — Spend time digging into stock image libraries (both free like Unsplash and premium like Shutterstock) to find visuals that truly fit your brand’s personality.

✔ Use vector illustrations and icons — Instead of generic icons, go for something unique. Check out Flaticon, IconFinder, or custom SVGs to ensure even small visual elements feel crafted.

✔ Make images cohesive — If your images have different lighting, tones, or quality, they’ll feel disconnected. Edit them to maintain a consistent aesthetic.

2. Fonts: The Unsung Hero of Branding

Typography is one of the biggest giveaways of a standard website vs. a polished one. Your font choices define your brand’s voice.

✔ Pick two (or three) fonts — Typically, you’ll want one font for titles/headings (bold and attention-grabbing) and another for body text (clean and readable). You can mix a third one for accents, but don’t go overboard.

✔ Ensure they fit together — Fonts should complement each other. Use a combination tool (like FontPair or Google Fonts suggestions) to test different matches.

✔ Match the brand’s personality — Are you running a serious corporate site? A soft, friendly blog? A tech startup? Your fonts should align with this. Ask AI (like ChatGPT!) to suggest font pairings based on your brand’s personality.

✔ Test before committing — Use a font tester tool (like Wordmark.it) to preview fonts on your actual text before applying them.

3. Block Separators & Section Transitions: The Small Detail That Changes Everything

Ever noticed how really polished sites don’t just have plain sections stacked on top of each other? The transitions between sections flow smoothly, rather than feeling like separate blocks.

Most builders like Elementor and Divi come with options for section separators, but many people leave them as the boring default.

✔ Use curves, angles, or waves — Instead of sharp, straight-cut sections, add curves or diagonal cuts to make transitions more dynamic.

✔ Match the theme’s personality — If your site is playful, use wavy transitions. If it’s corporate, go for sharp diagonal breaks.

✔ Create depth with layering — Adding a subtle parallax effect or overlapping sections can make your website feel more dynamic and engaging.

4. Animations: Small Movements, Big Impact

Animations give life to your website without making it feel overwhelming. Most WordPress sites rely on basic fades and slides, but a well-thought-out animation can tell a story as users scroll.

✔ Loading animations — Small details like a custom loading animation can set the tone the moment a user arrives.

✔ Looping elements — A moving icon, a gently animated background, or a breathing button can add life without being distracting.

✔ Scroll-triggered animations — Elements that animate as the user scrolls create a more immersive experience. Example: If it’s a cooking site, imagine spices floating in as you scroll. If it’s a construction site, imagine a house assembling itself.

✔ Subtle hover effects — Even small CSS-based hover effects can make interactions feel smooth and polished.

5. Colors: Feel It, Don’t Just Follow Rules

Choosing colors isn’t just about sticking to a predefined palette. Sometimes, breaking the rules leads to a more powerful design.

✔ Don’t always follow strict palettes — Sometimes, a mix of vibrant colors works better than a limited three-color scheme. Other times, a minimalist black-and-white approach (like for a photography website) creates more impact.

✔ Use color generation tools — Websites like Coolors.co help you experiment with different combinations and find colors that work together.

✔ Step outside the norm — Some brands thrive on unexpected colors. If a rainbow of shades fits your theme, use it. If everything in grayscale enhances your message, go for it.

✔ Test and feel the impact — Color isn’t just visual; it evokes emotion. Play around and see what resonates best.

6. Mood Board: Your Website’s Visual DNA

If you have the time, creating a mood board before designing your website can make a huge difference. A mood board helps you bring together all the elements that define your site’s aesthetic in one place.

✔ Include fonts, colors, icons, and images — Gather all the visual elements that represent your brand’s personality.

✔ Think of the feeling you want to create — Does your website need to feel bold and dynamic? Soft and elegant? A mood board helps lock in the right vibe.

✔ Use online tools — Platforms like Pinterest or Milanote are great for organizing ideas and inspiration.

✔ Half the job is done — When you finalize your mood board, you’ve already created the core visual identity of your website. The rest is just execution.

Final Thoughts: No, WordPress Doesn’t Have to Be Dull

Your website’s uniqueness doesn’t depend on the platform — it depends on how much thought you put into the details. WordPress gives you all the tools you need, but it’s your creative choices that will make it stand out. Personalize your site, experiment with colors, fonts, and images, and take advantage of animations and layouts. The result? A site that looks custom-built and professional — without feeling dull or generic.

Does a WordPress website have to be dull? was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Agilicus

Add Remote Operations: Support Clients Faster & Reduce Costs


Brickhouse Guitars

Boucher GR SG 162T GR ME 1041 D. Demo by Roger Schmidt


Grand River Rocks Climbing Gym

Family Day

The post Family Day appeared first on Grand River Rocks Climbing Gym.


Elmira Advocate

ANOTHER HISTORY LESSON - i.e. METHANE

 

The Woolwich Observer has a generally pretty good Editorial dated August 4, 2017 and titled "History of the area puts microscope on methane". Yes, there are a couple of errors, as the Observer appears to go along with Woolwich's methane propaganda about the quality of studies and the focus on safety. Actually almost none of that is present.

Overall the Editorial describes the length of time that methane has been present and the seriousness of it. What it doesn't do is point out the many errors and gaps in sampling that make the area such a concern. Also this Editorial appears to sell the idea that the long-standing hydrocarbon contamination is somehow urgent. It's been there for decades, leaking into the Canagagigue Creek for decades as well. Absolutely nobody, including the Ontario Ministry of Environment, seems remotely concerned about it. Now suddenly it's an issue. Or is the Township simply trying to use it as a bargaining chip in its fight with Frank Rattasid, the current owner of the property?

Seven or eight years later and little to nothing has been done. Par for the course for Woolwich Township. I still expect that there will be a bang at some point in time and can only hope that damage to a building is minimal and damage to human beings is not at all. Hope springs eternal.


Capacity Canada

Children’s Treatment Network (CTN)

Champions wanted! Children’s Treatment Network (CTN) invites you to apply for a volunteer Board Director or Community Committee Member position. Our exceptional volunteers use their lived experience, leadership skills and professional expertise to make a difference for over 33,000 kids and youth with disabilities and developmental needs and their families. Our innovative network operates through partnerships with public and private organizations in the health, education, community and social service sectors, which makes governing our organization a unique experience.

CTN’s vision is a vibrant community where all kids, youth and families belong. As part of our commitment to equity, diversity, inclusivity, Indigeneity and accessibility (EDIIA), our goal is to foster a governance structure that reflects the communities we serve. We encourage applications from individuals with disabilities, those who identify as Black, Indigenous or persons of colour, members of the LGBTQ2S+ community, those with lived experience as a family member or caregiver of an individual with a disability or developmental needs and those who want to contribute to our mission and achieve our vision.

We are seeking volunteers who are interested or have experience on a volunteer board of directors, align with CTN’s organizational values, will support governance needs and contribute to achieving CTN’s strategic goals. Our board reflects the diversity of communities we serve and includes people with a range of skills and experience.

Expectations: Board Director  
  • Three-year term of office
  • Volunteer approx. six hours a month
  • Sit on a board committee
  • Share your leadership, expertise and governance experience at the board level
Community Committee Member 
  • One-year term of office
  • Volunteer three hours a month
  • Participate on a board committee
  • Share your expertise
  • Build or share your governance experience
The deadline to apply is March 3, 2025. Fill out the application. We are happy to honour accommodations during any part of the application process and invite you to let us know how we can help.

About CTN:

CTN supports over 33,000 kids and youth with disabilities and developmental needs, primarily in York Region and Simcoe County, and delivers school-based rehabilitation services in Central and West Toronto. We provide intake, service navigation, rehabilitation services (including physiotherapy, occupational therapy and speech language therapy), specialized clinics and coordinated service planning, assessment and diagnostic services as well as some autism services. Our clients have a variety of diagnoses including learning disabilities, autism, developmental, neurological and physical disabilities.

CTN is a children’s treatment centre funded by the Ministry of Children, Community and Social Services. Our network operates through partnerships with service providers in the health, education, community and social service sectors. Together with our partners, we work towards making our vision of a vibrant community where all kids, youth and families belong a reality. CTN’s commitment to providing family-centred care is anchored by a shared client record that is accessed across partner organizations and provides the foundation for integrated plans of care and services.

The post Children’s Treatment Network (CTN) appeared first on Capacity Canada.


Agilicus

VPN Alternatives for Water and Critical Infrastructure


Agilicus

Add wastewater remote operations streamline service


Hoesy, Michalos & Associates

Can I File Bankruptcy a Second Time in Canada?

Life after bankruptcy doesn’t always go as planned. Despite your best efforts to maintain financial stability, you might find yourself struggling with debt again. If you’ve previously filed bankruptcy in Canada, you may be wondering if filing a second time is possible and what it entails.

Yes, you can file bankruptcy more than once in Canada. However, the bankruptcy process, timeline, and implications differ significantly from your first bankruptcy. Before making this decision, it’s important to understand what’s involved and explore all available options.

Can You File Bankruptcy More Than Once?

The Bankruptcy and Insolvency Act allows you to file bankruptcy in Canada more than once. To qualify for a second or subsequent bankruptcy:

  • You must be discharged from your previous bankruptcy
  • There must be no conditions on your previous discharge that would prevent a new filing
  • You must be able to demonstrate your current inability to repay your debts

Your Licensed Insolvency Trustee (LIT) plays a crucial role in repeat bankruptcies. They’ll assess your eligibility, review your financial situation, and ensure you understand all implications before proceeding with another personal bankruptcy filing.

How a Second Bankruptcy is Different than the First

When you file bankruptcy for a second time, there are several important differences you need to understand:

Longer Discharge Time: It will take longer to receive your bankruptcy discharge for a second bankruptcy than if this were your first time filing. A first bankruptcy typically lasts 9 to 21 months, while a second bankruptcy lasts at least 24 months (or 36 months if you have surplus income).

Higher Total Cost: Because a second bankruptcy lasts longer, you will make monthly payments for a lot longer. This means filing bankruptcy a second time will cost more than a first bankruptcy and this can become quite expensive if you have surplus income payments.

Longer Credit Report Impact: A second bankruptcy remains on your credit report for 14 years from the date of discharge, compared to 6 years for a first bankruptcy. This extended reporting period can significantly impact your ability to rebuild credit.

Additional Requirements for Third Bankruptcies

A third bankruptcy is significantly more complex than previous bankruptcies. The key difference when filing bankruptcy a third time is that there is no automatic discharge – you must attend a mandatory discharge hearing where a judge will review your case.

The court has considerable discretion and may impose specific conditions for discharge, suspend your discharge for a period of time or refuse your discharge altogether. In deciding, the bankruptcy court will evaluate several factors including your conduct during the bankruptcy, your income and payment history and the circumstances that led to your debts.

Given these serious implications, it’s highly recommended to explore alternatives before pursuing a third bankruptcy.

Filing a Consumer Proposal After Bankruptcy

Before proceeding with another bankruptcy, consider a consumer proposal. You can file a consumer proposal after bankruptcy for new debts if you have been discharged from your previous bankruptcy.

A consumer proposal is often a better alternative to filing bankruptcy twice since it:

  • Has less impact on your credit report (at most six years from filing versus 14 for a second bankruptcy)
  • Provides more flexible payment options and has lower monthly payments
  • Once accepted, there is no risk that a creditor or the court will oppose your discharge
  • Allows you to keep your assets

Your Licensed Insolvency Trustee can help you compare a consumer proposal and bankruptcy as well as look at other options including debt consolidation or a debt management plan.

Deciding if Another Bankruptcy is Your Best Option

Before making this decision, consider:

  • The extended bankruptcy discharge period
  • The long-term impact on your credit rating
  • Whether you’ve addressed the underlying causes of your financial difficulties
  • If you can manage a consumer proposal instead

Many people believe that filing bankruptcy multiple times means they’ve failed. However, sometimes circumstances beyond your control – job loss, illness, or family emergencies – can derail even the best financial plans.

The most important step is getting professional advice about your options. At Hoyes Michalos, we can review your situation and help you understand all available debt relief options. Contact us today for a free, confidential consultation to discuss whether a second bankruptcy is right for you or if another solution might work better for your situation.

Book Your FREE Consultation

The post Can I File Bankruptcy a Second Time in Canada? appeared first on Hoyes, Michalos & Associates Inc..


Code Like a Girl

How A Simple, Stupid Error Keeps Crashing My App

A Debugging Tale of Crashes, Clues, and Careless Mistakes

Continue reading on Code Like A Girl »


Code Like a Girl

What Is The Difference Between Quantum Computers And Supercomputers?

Hint: It’s not their size

Continue reading on Code Like A Girl »


Code Like a Girl

When Will It Be Ready?

Estimations Aren’t the Enemy — Unless You Use Them Wrong

Continue reading on Code Like A Girl »


James Davis Nicoll

A Devil Like You / To Reign In Hell By Steven Brust

Steven Brust’s 1984 To Reign in Hell is a stand-alone fantasy novel set in the Christian shared universe.

Existence is a sea of chaos, in which anything might appear… briefly, before dissolving. When the firstborn angels — Yaweh, Satan, Michael, Lucifer, Raphael, Leviathan and Belial — manifested, they possessed a will to live and the power to fend off the chaos.

Heaven was their refuge, an artificial realm of stable laws safe from corrosive chaos. However, Heaven was flawed.



Cordial Catholic, K Albert Little

This is what Protestants Misunderstand about Worship #catholicchurch #bible #christian #biblestudy


Aquanty

Output Peclet Number

This post highlights a key tool for evaluating solute transport and density-dependent flow models: the output peclet number command. When building these models, a common approach is to first establish a steady-state flow solution, then validate transport using flow outputs as initial conditions, and finally introduce density dependence if needed. The output peclet number command calculates the grid Péclet number (Pe), helping identify areas where numerical dispersion or unstable transport solutions may occur. Keeping Pe below 2 is generally recommended, with mesh refinement as the primary method for reducing high values. This tool is invaluable for diagnosing transport issues before running long-term scenario simulations.

Figure 1: Peclet number distribution - coarse mesh

For those of us building solute transport or density-dependent flow models, we know from experience that these features add significant model complexity and can lead to major headaches!

A common approach to building these models is to

Figure 2: Evidence of numerical dispersion

  1. Get a good flow model working first (spin up to steady state),

  2. Get a good transport model working, using your flow model outputs as initial conditions. If your transport solution collapses, go back to step 1 and refine your model.

  3. (Optional) Add density dependence, and revisit flow again if your model blows up.

  4. Run your scenario analysis simulations.

Figure 3: Peclet number distribution - fine mesh

If you’re at the stage where you’re trying to evaluate your transport model, consider using the output peclet number command. Output peclet number calculates the grid Péclet number (Pe) for your mesh, using flow rates calculated from your initial conditions. The results will be included in your prefixo.pm.dat file for visualization in Tecplot.

Pe = (L*v)/D = advective transport rate / dispersive transport rate

Figure 4: Solute distribution after 30 years - fine mesh

Here, L represents the mesh resolution in the direction of flow, v represents the local flow velocity, and D is the diffusion coefficient. The general guideline is to try and keep Pe below 2, and your best course of action to lower Pe is to refine your mesh in areas where it is too large. The value of 2 is not a hard cap, but when the value of Pe is much greater than 2 you may start to see numerical dispersion or incorrect transport solutions, such as negative concentrations and concentrations exceeding source concentrations.

Here’s an example from a 2D cross-section model where a solute is being released beneath a hillslope. There were no issues with the flow solution at a resolution of 10 m (x) and 1 m (z), but after adding transport and using the output peclet number command, I found grid Péclet numbers > 6 near the source, and values > 4 elsewhere along the hill (Figure 1).

Then when I looked at the solute concentrations, I started seeing numerical dispersion (Figure 2; surprise surprise…).

To make sure the transport solution would not break down, I rebuilt the model with a resolution of 2.5 m (x) and 0.5 m (z). With that change, Pe was much closer to 2 across the hill (Figure 3). I was happy enough with that to simulate solute transport out to 30 years.
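That refinement is consistent with the definition above: Pe scales linearly with L, so cutting the horizontal resolution from 10 m to 2.5 m (a factor of 4) should bring a grid Péclet number just above 6 near the source down to roughly 1.5, which is in line with the Figure 3 result.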

Note: If you are new to transport or density modelling (especially density!), it’s never a bad idea to start with a 2D cross-section model and work towards full 3D later if you need to.


Aquanty

New Commands to Report Water Table Depth/Elevation

This post introduces two new commands in HydroGeoSphere (Revision 2270): Report water table at xy and Report water table at node. These commands allow you to report the water table at a specific location, either by node or XY coordinates. When using these commands, HydroGeoSphere identifies the nearest vertical column of nodes and interpolates pressure head values to report the first location where the pressure head is 0, which is particularly useful for detecting perched water tables. These commands provide a simple and efficient way to track water table elevations at specific points in your model.

Figure 1: Report Water Table command entries from the Reference Manual (page 271)

This week’s post is a quick one to highlight two new commands that have been included in the June 2021 release of HydroGeoSphere (Revision 2270). The commands in question are Report water table at xy and Report water table at node.

It’s immediately clear that these commands are used to report the water table at a given node or XY location. Let’s take a closer look at how these commands are applied, the resulting output and some potential pitfalls of applying these commands. As usual, an example problem is referenced throughout this post.

Download the example problem (Abdul_WaterTable) here: Abdul_WaterTable.zip

Figure 1 shows the Reference Manual entries for these new commands, and as you can see they are very easy to apply. To report the water table, simply apply the command in your *.grok file, followed by a descriptive name to be used in the output file (e.g. “Point 1”), followed by the node number OR the X/Y coordinates of the location in question.

Please note that a Z-coordinate is not required here, as all nodes in a vertical column will typically share the same water elevation. However, there may be some cases where two water tables would be present at the same location (i.e. with a perched water table). To understand what happens in this situation it’s helpful to know exactly how this command works.

Figure 2: Application of `Report water table’ commands in the .grok file

When using these commands, HGS identifies the nearest vertical column of nodes, interpolates the pressure head values between nodes starting from the top and working downward, and reports the first location where the pressure head is 0. Therefore, if there is a perched water table the Report water table commands will identify the upper water table.

Now let’s see an example of these commands in action. Using the Abdul_WaterTable example problem (download link above), we can see both versions of the command applied at the bottom of the *.grok file (see Figure 2 below). Note that both versions of the command are included, although the Report water table at node versions have been commented out to avoid repetition. Either set of commands would result in the same output (although with modified file names, i.e. Point1 vs. Point3).
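As a rough sketch of what those entries look like, following the argument order described above (the coordinates and node number below are placeholders, not values taken from the example problem):

report water table at xy
Point1
50.0 25.0

report water table at node
Point3
1250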

When the model is run the resulting water table elevations are written to the files ‘abdul_WaterTableo.water_table.Point1.dat’ and ‘abdul_WaterTableo.water_table.Point2.dat’, corresponding to observation points 1 and 2 (see Figure 3 below).

If we open ‘abdul_WaterTableo.water_table.Point1.dat’ we can see that the resulting output includes both the depth to water table (based on the surface layer elevation) and the overall water table elevation at every timestep:

Figure 3: Location of ‘Abdul_WaterTable’ observation points

Figure 4: “Report water table at xy” output file


Aquanty

Blanking by element number with TecPlot for hydrogeological models

This post introduces a workflow for visualizing individual model layers in TecPlot using a custom macro. The workflow overcomes the limitations of TecPlot's built-in “Value Blanking” functionality, which can produce artifacts in complex scenarios, such as when zones are out of order or cross layer boundaries. The proposed method blanks layers based on element numbers, ensuring accurate visualization of each model layer. We find this approach particularly useful when you need to isolate and explore specific layers in a model, making it ideal for producing a clear, detailed "walk-through" of your model.

TecPlot offers built-in “Value Blanking” functionality which can be used for quick investigation of individual model layers. This functionality is however somewhat lacking in complex situations. For example, if a model is built with zones out of order from their layer order, or if zones cross layer bounds, TecPlot’s value blanking by zone can produce artifacts. The basic issue is that zone or property numbering may not accurately define layers in a model.

A useful tool for properly isolating layers in a model is presented below. It bases blanking on element number, and truly captures the full extent of each layer in a model. All you need is the attached TecPlot macro (IJK_Blanking.mcr), and some information on your model layering and mesh. The attached excel sheet (IJK Blanking Spreadsheet.xlsx) will help organize this information and provide necessary values for the blanking macro.

Download the necessary files here

Step-by-step instructions:

1. Prepare Data

Figure 1: Prepare data

Load 3D model data (i.e. "R5.dat") into TecPlot. For the purposes of layer exploration, the mesh to tecplot HGS command, utilized in grok, will output the smallest 3D mesh file. The "R5.dat" file included in the example files was produced using the mesh to tecplot command.

Figure 2: Confirm new variable

  • In TecPlot: Data > Alter > Specify Equations (Figure 1)

  • In the Equation(s) text box enter: {b} = 1

  • Change the New var location pull down box to Cell center.

  • Press Compute, which should result in a successful message.

  • Confirm that a new column titled “b” has been created by viewing Data > Spreadsheet…; make sure to select Cell center under the Value Location pulldown menu (Figure 2)

2. Load Macro

  • Go to: Scripting > Play Macro/Script…

  • Select the “IJK_Blanking.mcr” macro; the location of the macro on your system is not critical.

  • If you don’t see the “Quick Macro Panel” on the right of the TecPlot window, go to: Scripting > Quick Macros

  • You should now see a list of macros in a panel on the right of the TecPlot window, one of which should be “IJKBlankZone”. (Figure 3)

3. Prepare Data for Macro

  • Open: “IJK Blanking Spreadsheet.xlsx”. Input data for the macro will be prepared here. Peach coloured cells indicate values you must fill in based on your specific mesh. You can find the required information from a number of sources including the HGS project *.eco file (e.g. total # of elements/layers), within TecPlot, or the *.amproj file if you used AlgoMesh to create the model mesh.

  • The values presented in the lower portion of the spreadsheet will provide the values you need to enter, in order, while running the macro. (NOTE: “Zone to Blank” refers to TecPlot zone, which is associated with a timeseries, it is not the model zone.)

  • The attached spreadsheet contains all the correct information for the "R5.dat" file.

Figure 3: New IJKBlankZone macro

4. Run Macro

  • Double-click on IJKBlankZone in the Quick Macro Panel (Figure 3)

  • Enter the values as indicated in the spreadsheet

5. Blank out undesired layers (Figure 4)

  • Go to: Plot > Blanking > Value Blanking

  • Select Include value blanking

  • Under Blank entire cells when:, select primary value is blanked

  • For Blank when select b and is equal to constant 0 (zero)

  • Activate by clicking the Active checkbox.

Optionally, you may repeat Step 4 (Run Macro), adjusting the “Bottom Blank” and “Top Blank” values to blank successive layers. By default TecPlot does not plot the top of a volume in the middle of a model, leading to a concave look in some situations. If you are seeing this, go to:
Zone Style… > Surfaces > Surfaces to Plot > Exposed cell faces

If you followed the example above closely you should see results as illustrated below (i.e. blanking layers 1 & 2; Figure 5).

Perhaps the most useful application of this technique is to successively blank layers down from the top of your model to produce a layer "walk-through". Hopefully you find this feature useful, and please feel free to post any questions or comments below.

Figure 4: Value Blanking dialogue window

Figure 5: Resulting blanking based on element number


KW Predatory Volley Ball

Congratulations 13U Validus. 15U McGregor Cup Trillium Green A Gold

Read full story for latest details.

Tag(s): Home

Aquanty

Integrating HGS Models with PEST for Automated Parameter Estimation

This post introduces a detailed tutorial on integrating PEST (Parameter ESTimation) with HydroGeoSphere (HGS) for automated parameter estimation. The tutorial walks through the structure of PEST input files and guides you on how to incorporate them into your HGS models. It's designed to help you run parameter estimation using PEST, though it doesn't cover advanced PEST modes like Tikhonov regularization or predictive analysis. The tutorial is based on the Abdul verification problem and includes all necessary input files. By following the steps outlined, you can run PEST to optimize parameters in your HGS model with ease.

We’re pleased to provide a fully documented tutorial on how to integrate the automated parameter estimation processes offered by PEST into your HydroGeoSphere models. This document carefully reviews the structure of PEST input files and discusses how you would integrate these PEST files with your HydroGeoSphere model. Please note that this tutorial is primarily geared toward the parameter estimation mode, and does not cover more advanced PEST modes such as Tikhonov regularization, Singular Value Decomposition (SVD assist), predictive analysis mode, or pareto mode.

If you find this tutorial helpful and are interested in further information regarding the advanced methods listed above please provide feedback to help guide me toward more useful topics.

Download the tutorial documentation here: Using PEST to Calibrate a HydroGeoSphere Model.pdf
Download the example problem here: PEST_tutorial.zip

This example problem is based on the Abdul verification problem (of course!) and includes all the necessary PEST input files. Strictly speaking, all you need to do to run the PEST parameter estimation is:

  1. Download the example files

  2. Run the Abdul_PEST model through HGS (i.e. run grok.exe then phgs.exe)

  3. Copy all files from the 'PEST Input files' folder into the model folder

  4. Open the command line and type 'pest.exe Abdul_PEST.pst' and hit enter

PEST will then initiate the optimization algorithm, which may take a while to run (it depends on your computer specs; on my 16-core machine it only took 30-35 minutes). The optimization algorithm will run through >75 model runs over the course of 7 optimization iterations.

The tutorial documentation provides additional insights on the structure of PEST input files and how you can adapt this ready-made example project to your own HGS models (as well as tips/tricks/best practices). Images below come from the tutorial file itself, and illustrate the key variables in the PEST control file (Figures 1 & 2), input template files (Figure 3) and output reading instruction files (Figure 4). For more detailed information see the tutorial documentation.

We hope you find this PEST 101 document helpful! Please let us know if anything is unclear, or if there are additional topics of interest related to PEST that we can expand upon.

Figure 1: PEST Control File (.pst) Format Part 1

Figure 2: PEST Control File (.pst) Format Part 2

Figure 3: Input Template File (.tpl) Format

Figure 4: Output Reading Instruction File (.ins) Format


Aquanty

Zero Order Source with Partitioning

This post introduces the new zero-order source with partitioning command, added to HGS in revision 2291 (August 2021 update). This command enables more realistic modeling of gaseous species production in the unsaturated zone (USZ), considering the partitioning between the aqueous and gas phases based on solubility and water saturation. The new command improves upon the previous zero-order feature by scaling production smoothly, using a partition coefficient and water saturation, instead of a simple on/off function. We find this command especially useful for modeling species like 222Rn, where equilibrium between gas and aqueous phases is critical for accuracy in simulations.

The objective of this new command is to consider the production of gaseous species within the unsaturated zone (USZ) in a more realistic manner than is currently achievable with HGS. Gaseous species produced within the USZ (e.g. 37Ar or 222Rn) will partition between the aqueous and gas phases as a function of their solubility in water and the water saturation. The native HGS zero-order feature leads to high concentrations in the USZ, as produced species are dissolved in a comparatively smaller volume of water; in reality, the concentrations will sometimes be lower, due to the partitioning into the gas phase described above.

A saturation threshold command was added last year (Oct. 2020), which switches off zero-order production when water saturation falls below a user-specified value. This feature is particularly well-suited to species with low solubilities, and those which are in close contact with the atmosphere (shallow subsurface). However, for species which are more soluble (e.g. 222Rn) and/or produced in the USZ at depths of more than a couple of meters, it is more realistic to assume an equilibrium between the gas and aqueous phases, as a function of both the partition (or Henry) coefficient and the water saturation.

Figure 1: Effective production rate

Based on a simple instant-equilibration mass-balance model, the new ‘zero order source with partitioning’ command scales production according to the following relationship

Figure 2: Command description for “zero-order source with partitioning”

The partition coefficient (Hcc) is defined by the user and can be considered a constant (although it does vary with temperature in reality). This equation assumes that every produced atom in the USZ will partition immediately into both gas and aqueous phases according to this coefficient.

The result is that the zero-order source is produced more smoothly as a function of saturation, rather than using a simple on/off function. It remains quite simplistic, as it assumes completely static conditions in the gas phase, and also assumes a constant partition coefficient, which is synonymous with isothermal conditions.

Let’s take a closer look at the new command in action. To use this command in your *.grok file you must specify a number of panels for a time-value table, and for each panel you will specify the time on, time off, the mass production rate under fully saturated conditions, and the partition coefficient. Note that production rate and partition coefficients must be specified for all species in your simulation. A detailed description of the command is included in section 2.7.7.6 of the reference manual, and is reproduced in Figure 2 for your convenience:

Figure 3 illustrates the impact that the zero-order source with partitioning command has when compared to the original zero-order source and zero-order source with saturation threshold commands. The figure below shows the 222Rn activity as a function of distance from the top of the column when all three variations of the command are applied:

Figure 3: Results from three different versions of the “zero order source” commands

You can reproduce these results yourself using the example project below, which includes all three versions of the ‘zero order source’ commands (although only one should be active at a time).

This simple simulation applies steady-state infiltration from the top of a 1-D column, with a zero-order source and first-order decay corresponding to 222Rn production and disintegration. The column is 5 meters in length, and discretized into 100 elements. A constant rain flux of 0.05 m/d is applied to the upper boundary, and a constant head of 2 m is applied to nodes at the bottom (z = 0) of the column. The column flows under unsaturated conditions until approximately 2.8 m from the inlet. van Genuchten parameters are given in the .mprops file. Figure 4 illustrates the correct application of the zero-order source with partitioning command.

Download the example project here.



Aquanty

Nodal flux reduction by pressure head boundary condition modifier


Figure 1: Simple Drain and Makeup Water box model

This post describes how to use the flux nodal boundary condition modifier: nodal flux reduction by pressure head. This modifier is used when the flux rate is set higher than the available water, which can cause numerical instability and model crashes. By applying the modifier, the flux rate is reduced according to the pressure head, allowing the model to remain stable even when water availability fluctuates. We find this modifier particularly useful when simulating conditions where the water extraction needs to be limited based on available pressure, ensuring more realistic and stable simulations.

For this post we use the same model that we used to demonstrate the simple drain and makeup water boundary conditions (Simple Drain and Makeup Water Boundary Conditions).

The model (see Figure 1) is a simple box model with a depression (pond in the middle). The lateral boundary conditions at x = 0 m and x = 50 m are specified head boundary conditions that fluctuate between 7 and 10 m on a 30-day cycle as follows:

time value table
0      9
30     7
60     10
90     7
120    10
150    7
180    10
210    7
240    10
end

Figure 2: Command description

This range of fluctuation is enough to move water in and out of the depression. In the middle of the depression a flux nodal boundary condition has been assigned with a rate of -2 m3/day. This rate was selected as it exceeds the available amount of water to be pumped during the periods of the simulation where the outer head boundary conditions have a value of 7.

Model Setup:

For this example the simple drain boundary condition has been replaced with a flux nodal boundary condition with the nodal flux reduction by pressure head modifier.

!--------------------- Flux Nodal Boundary Condition
use domain type
surface


clear chosen nodes

choose node
25 0 8

create node set
n_nodalflux

boundary condition 
	type
	flux nodal

	node set
	n_nodalflux

	time value table
	0 -2
	end

    nodal flux reduction by pressure head
    0.01
    0.1 
end

Figure 3: Automatic flux reductions based on pressure head signal

Results:

Figure 3 shows how the flux rate changes over the course of the simulation. When water is available the extraction rate is -2 m3/day; however, as the available water decreases (the pressure head drops), the rate of water extraction drops until the pumping stops.

Closing Notes:

  1. In the event that you don’t want to use a pressure head specification, a variation of this command exists which controls the flow rate based on total head – nodal flux head constraints

  2. When using nodal flux reduction by pressure head it is important that you inspect the reported flux in the water balance file as it may be less than the maximum value specified.


Aquanty

Calculate fluid volumes for selected elements

This post introduces three new commands added in Revision 2321 (November 2021) for tracking fluid volumes in selected elements within a model. Conceptually, these commands help track fluid volumes in both the porous media and surface flow domains over time by selecting elements of interest in the model. The “by layer” version adds more precision, allowing volumes to be reported for each layer. These commands are valuable for tracking water volumes in both surface and subsurface regions, making it easier to analyze fluid distribution across the model.

Figure 1: Command description for Fluid volume for chosen elements

With the release of Revision 2321 (November 2021) we have introduced three commands:

  • fluid volume for chosen elements

  • fluid volume for chosen elements by layer

  • fluid volume to tecplot

These new commands are similar to other “polygon tracking” commands like fluid mass balance for olf areas using shp file, except with these new commands you can actually compute total fluid volumes, as opposed to volumetric fluxes.

Download the following example project to review an example of these commands:

Abdul_AlgoMesh_Fluid_Volume.zip

Figure 2: Command description for Fluid volume for chosen elements by layer

These commands compute timeseries of the stored volume of water within a set of chosen elements for the porous media domain and the surface flow domain (if present). The ‘by layer’ version of each command allows you to further quantify these volumes for every layer in the mesh (not only the selected layers). In both cases, fluid volumes are reported for both the surface and subsurface domains.

Application of these commands is quite simple. Just use any method to select a series of elements, invoke the command and provide a name for the volume (which is used to name the output file). The command descriptions from the Reference Manual are included below (note that these descriptions appear in the new Reference Manual section 2.12.4.6 – Fluid Volume).

The fluid volume to tecplot command simply exports a Tecplot formatted file for all active fluid volume for chosen elements commands. Please note that the actual fluid volume data is not included in this Tecplot formatted file, but it can be used to visualize the element selections for these commands.

Let’s see the results of these new commands in action. To recreate the images below please download the following example project:

Abdul_AlgoMesh_Fluid_Volume.zip

Figure 3: Command description for Fluid volume to tecplot

Figure 4: Results of fluid volume for chosen elements written to "abdul_algomesho.fluid_volume.volume_1.dat"

In this example project we have included the following code snippet at the very bottom of the *.grok file:

!--------------------------  Fluid Volume for Chosen Elements
clear chosen elements
choose elements am
./mesh/stream_channel_fine.echos
10, 16
Fluid volume for chosen elements
volume_1
Fluid volume for chosen elements by layer
volume_2
Fluid volume to tecplot

Figure 5: Results of fluid volume for chosen elements by layer written to "abdul_algomesho.fluid_volume.volume_2.dat"

In this block of code we have selected the elements in the top 5 layers of the model, delineated by the stream channel. The fluid volume for chosen elements command writes the volumes associated with the surface and subsurface domains within the upper 5 layers to the "abdul_algomesho.fluid_volume.volume_1.dat" file (see Figure 4).

The fluid volume for chosen elements by layer command writes volumes for ALL model layers delineated by the stream channel to the "abdul_algomesho.fluid_volume.volume_2.dat" file (see Figure 5).

These fluid volume output files can be easily loaded into Tecplot to visualize timeseries water volumes (see Figure 6).

Finally, the "abdul_algomesho.fluid_volume_selection.dat" file can be used to visualize the delineation of both zones within Tecplot (see Figure 7).

We know these commands will make tracking water volumes so much simpler! Please let us know if you experience any issues with the new commands, or if you have any ideas on how they can be improved.

Figure 6: Timeseries showing results of fluid volume for chosen elements command

Figure 7: Delineation of zones affected by fluid volume for chosen elements (left) and fluid volume for chosen elements by layer (right)


Brickhouse Guitars

SGI Avenir The Bear Demo


Andrew Coppolino

“Bastard” saffron

Reading Time: < 1 minute


Saffron, of course, is an absolutely stunningly delicious spice that is derived from the yellowish-orangey stigma of the purple crocus. It’s the spice that turns paella and risotto Milanese a wondrous yellow colour and adds a heady, earthy aroma that drives appetite.

The other stigma of saffron? The price. A little goes a long way, but it takes nearly 15,000 little wee stigmas, intensely picked by hand, to give you an ounce of saffron.

Bastard saffron, also known as saffron thistle (pictured above), is the flower of the safflower plant, which can be a substitute for saffron (if you can find it).

Crocus-stigma saffron (andrewcoppolino.com).

The qualities of bastard saffron don’t make it saffron, of course, but safflower oil itself is a polyunsaturated oil that is excellent for frying because of its high smoke point; it’s also good for salad dressings because it doesn’t solidify when you store it in the fridge.

[Banner photo/Wikimedia Commons]

Check out my latest post “Bastard” saffron from AndrewCoppolino.com.


Aquanty

Fluid Volume Concentration Threshold


Figure 1: Module 4B conceptual model

This post describes how to use the fluid volume concentration threshold command, introduced in the January 2022 release (revision 2342), to calculate fluid volumes throughout the model domain that exceed or fall below a user-defined concentration threshold. Conceptually, we can think of this command as a way to separate water into two categories based on solute concentration: water above the threshold (contaminated) and water below it (uncontaminated). The split is computed directly during the simulation, removing the need for post-processing in external software like Tecplot.

Figure 2: Command description

The command is called fluid volume concentration threshold and it is designed to calculate the fluid volumes throughout the model domain that fall above and below a user-specified concentration threshold. This command can be extremely helpful in situations where the total volume of contaminated water is required, and it removes the need for post-processing using Tecplot or other programs.

A simple example is available, based on Module 4B of the 'Introductory Modules' (the introductory modules are a series of increasingly complex box models which demonstrate contaminant transport through fractured media under steady-state vs transient conditions, homogeneous vs heterogeneous, etc.). The introductory modules are available for download on the Guide for New Users page.

Click here to download an example problem (Module 4B) which demonstrates the fluid volume concentration threshold command.

Figure 3: Example of fluid volume concentration threshold command in .grok

Please note that this version of the example has been slightly modified to include a much higher specified concentration BC for the 'salt' solute. The specified concentration at the inlet has been set to 35 kg/m^3, i.e. to the approximate salinity of seawater.

First let's review the command itself. As you can see from Figure 2, it's quite simple to implement, requiring only a label for the output file and a concentration threshold for each species.

If you scroll to the bottom of the Module4b .grok file you can see the application of the command. In this case the model calculates the total volumes of 'brackish' and 'fresh' water throughout the model domain, based on a 1.0 kg/m^3 (i.e. 1,000 mg/L) threshold.
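For reference only, the block at the bottom of the .grok file might look something like the sketch below. This is an assumption-laden sketch rather than verified syntax: it assumes the command is followed by a label for the output file and then one threshold value per defined solute (Module 4B defines the single solute 'salt'), and the label 'threshold_Volume' is simply inferred from the output file name shown in Figure 4. See Figures 2 and 3 and the Reference Manual for the exact form.

!-------------------------- Fluid Volume Concentration Threshold (sketch)
fluid volume concentration threshold
threshold_Volume      ! label used to name the output file
1.0                   ! threshold for solute 'salt' [kg/m^3], i.e. 1,000 mg/L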

Figure 4: Resulting output file

Note: you MUST specify a threshold for ALL solutes defined in the problem. Threshold values should be defined in the same order that solutes are created/defined in the .grok file. Associated output files will be generated for each solute.

After running phgs.exe we should see the associated file appear in the project folder (module4bo.fluid_volume_conc.threshold_Volume.salt.dat, see Figure 4). This file contains the total volumes of water greater than the specified threshold ("PM > Thresh"), less than or equal to the specified threshold ("PM <= Thresh") and the total volume of water in the porous medium domain ("PM Total").

By the end of the simulation (3.1536E+08 seconds = 10 years) we can see that approximately 25% of the total fluid volume would be considered brackish. Figure 5 illustrates the distribution of this volume using the isosurface feature in Tecplot, and we can see that the solute concentrations are centered around the location of fractures:

We hope you find this new command very useful, please let us know what you think!

Figure 5: Resulting salt concentration distribution (isosurface displayed at conc. = 1.0 kg/m^3)


Aquanty

Restarting a Terminated Run using restart_file_info.dat

This post outlines the improved HydroGeoSphere restart functionality, designed to simplify resuming a model run after unexpected termination. Previously, restarting required modifying multiple input files, rerunning grok.exe, and manually appending outputs. Now, with the automatic generation of parallelindx.dat, restart_file_info.dat, and prefixo.restart, the process is much more efficient.

By updating the __Simulation_Restart value in the parallelindx.dat file, users can seamlessly restart simulations without adjusting model inputs. This approach ensures continuity, maintains model states, and offers flexibility in managing output files. This feature is particularly useful for long or complex simulations where interruptions may occur.

Once again, this ‘command of the week’ post is not going to highlight a particular HGS command but instead presents a bit of an advanced technique. This week’s post is all about restarting a model run that was terminated early for whatever reason. Fortunately, as of June, 2021 (revision 2270) we have overhauled the model restart process to make it much easier to implement!

There are several reasons for an HGS simulation to terminate early:

  • Power failures!

  • A new boundary condition comes into effect which results in a diverging solution;

  • Manual termination to devote some CPU power to other tasks

  • Perhaps you’re working in a supercomputing environment with a fixed maximum run-time policy and your model takes much longer to run (resulting in several restarts).

Among HydroGeoSphere users, the most common method of resuming a simulation is probably to save the available outputs, retrofit the model *.grok file with the Initial head from output file command (for all active domains) and to modify the initial time of the model using the initial time command. The previous head files are then used to initialize the head throughout the subsurface and surface domains, and HGS then calculates the velocity, flux, water saturation, etc. The re-initialized model then becomes identical to the terminated model at the final output time. However, this method requires you to make several adjustments to the model input files, re-run grok.exe, and what’s most unfortunate is that you would then have to spend considerable time updating your output files and concatenating data from multiple model runs.

Figure 1: Example ‘parallelindx.dat’ file. Note that only the final setting (__Simulation_Restart, the “restart index”) is used in the model restart procedure. Restart mode is activated if >1 when phgs.exe is initiated.

The new restart functionality takes care of all these issues for you. To understand how this process works, there are a few "behind the scenes" things that you should understand:

  • When you initiate a model run, phgs.exe will create a file called ‘parallelindx.dat’. This file is used primarily to specify whether a model will be executed in parallel mode, but it does also include a flag (__Simulation_Restart) which indicates whether a model will be run from scratch (i.e., time = 0) or whether it should be restarted from a later time.

  • At every timestep, phgs.exe will update the binary ‘prefixo.restart’ file, which records the latest head (and concentration if transport is active) across all active model domains at the latest timestep.

  • At every timestep, phgs.exe will update the ‘restart_file_info.dat’ file, which records information required to initiate the model restart.

Figure 2: Example ‘restart_file_info.dat’ file. This file is updated automatically and does not require user input, unless the command Restart write off has been included (see Figure 3 below).

Using these three files, HydroGeoSphere is now able to initiate a model restart without requiring any changes to your *.grok file or any inputs (other than parallelindx.dat and restart_file_info.dat files).

Here is a quick overview of how the restart process works:

Figure 3: Reference Manual entry for Restart write off command

  1. You run a model, it terminates prematurely, and you want to restart it from where you left off.

  2. You open the parallelindx.dat file and change the restart index (i.e., __Simulation_Restart) to any integer greater than 1. Save and close the file.

    • When phgs.exe is initialized, it will recognize that a model restart is required.

  3. An optional step is to open 'restart_file_info.dat' and change the __append_to_output_files logical flag.

    • By default, this flag is set to ‘T=true’, which means that all regularly generated output files (e.g., 'prefixo.lst', observation point/well outputs, species mass balance files, boundary condition output files, etc.) will have results appended to the existing file.

    • Setting this flag to ‘F=false’ will create new output files that incorporate the __Simulation_Restart value into their file names. For example, if the restart index is set to 2, the new *.lst file would be named 'prefixo.0002.lst'.

  4. Run phgs.exe again, no further changes are needed!

    • phgs.exe recognizes that a restart is required based on the __Simulation_Restart value within the 'parallelindx.dat' file.

    • phgs.exe will read 'restart_file_info.dat' to determine the latest successful/completed timestep (__initial_time), the timestep size for the next timestep (__initial_timestep), the next timestep target (__ntloop_target), the starting index number for future binary output files (__iphead) and a logical flag indicating whether model output files should be appended or overwritten (__append_to_output_files(F=false,T=true)). If transport is active a starting index number for these binary outputs (__ipconc) will also be included.

    • The 'prefixo.restart' file is used to update the initial heads and concentrations throughout the model, allowing the model to resume seamlessly from where it was terminated.

  5. The model will carry on as though it never failed. Output files will be either appended, or new versions (with the __Simulation_Restart index in the filename) will be created.

In certain situations you may want to disable updating of the restart files ‘prefixo.restart’ and ‘restart_file_info.dat’ during a simulation. You can do so via the command Restart write off (restart files are always updated at the end of a successful simulation regardless of this command). Use this command with caution, however, since if your simulation does not complete successfully, you will be unable to restart it via the restart feature.
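For example, if you did want to suppress those per-timestep updates, the command is simply added on its own line of the *.grok file. The placement below is illustrative only (see the Reference Manual entry in Figure 3 for details); the command appears to take no additional input:

!------------------------- Restart control (illustrative placement)
restart write off     ! stop per-timestep updates of prefixo.restart and restart_file_info.dat;
                      ! restart files are still written at the end of a successful run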

Figure 4: The 'prefixo.eco' file can be used to identify the next target time and initial head index #s listed in the 'restart_file_info.dat' file.

Figure 5: The Abdul_Transport problem’s ‘restart_file_info.dat’ file after exactly 26 timesteps are successfully solved.

You can easily test this new restart procedure yourself using any of the readily available verification models. In the images below I have highlighted some of the resulting files after running the abdul_transport verification problem and terminating the run once it had successfully passed t=1800 (i.e., 26 timesteps). (To terminate a model run prematurely, press CTRL+C in the command line while phgs.exe is running.)

Before running phgs.exe we can open the 'abdul_transport.eco' file to review the list of target times (see Figure 4).

After running phgs.exe and terminating the model after exactly 26 timesteps we can see that the 'restart_file_info.dat' file has updated itself (Figure 5 below). When this model is restarted it will use a new initial time of 1800 seconds and an initial timestep of 100, and the next target time index is 7 (i.e., t=3000).

The index #s for binary outputs are given by the __iphead and __ipconc settings. This ensures that the correct output time index # is applied to the end of binary output files. In this case, since __iphead and __ipconc have a value of 6, the next output time (t=3000) will have binary outputs in the format 'prefixo.variable_domain.0007' (e.g., 'abdul_transporto.head_pm.0007'). This ensures that the binary output file numbering follows on from the current outputs without missing a beat.

Finally, non-binary output files for this model will be appended to existing files as indicated by the ‘__append_to_output_files’ setting (see Figure 5)

To restart this model simply open the ‘parallelindx.dat’ file and change the ‘__Simulation_Restart’ setting to anything greater than 1, then save the file and initiate phgs.exe from the command line.

Figure 6: ‘abdul_transo.water_balance.dat’ after model termination/restart/completion.

You will see that all non-binary output files are appended, showing no sign of the model terminating. For example, Figure 6 below shows the resulting ‘abdul_transo.water_balance.dat’ file after the model was restarted and run to completion:

We hope that this new feature helps ease some of the frustrations of unexpected model terminations/failures. There may be some model states that are held in 'volatile' memory and would be lost in the case of a model crash, but this new restart feature should be valid and appropriate for the vast majority of models. If you do notice any unusual behaviour after a model restart please do let us know. And if you have any questions about this new feature don’t hesitate to ask, we're here to help!


Capacity Canada

Lactanet -Administrateur(trice) externe

Board of Directors – External Director

Are you a dynamic and experienced leader looking to shape the future of a forward-thinking company? We are actively seeking an accomplished individual to join our Board of Directors as an External Director. In this role, you will play a key part in guiding the strategic direction of our company. We are looking for someone whose varied experience and expertise can bring fresh perspectives, innovative ideas, and a commitment to excellence to our Board.

Key Responsibilities
  • Collaborate with fellow board members to define and implement the company’s strategic vision.
  • Provide sound insights and expertise related to your field of specialization.
  • Participate actively in board meetings, committees, and strategic planning sessions.
  • Help identify potential risks and opportunities for the company.
  • Represent the company’s interests in a professional and ethical manner.
Qualifications and Experience
  • A strong track record of success in your field
  • Experience serving on boards or in leadership positions
  • Strong interpersonal and communication skills
  • A passion for the dairy industry and a commitment to the company’s mission and values
  • A demonstrated ability to think critically and strategically
  • A passion for innovation and
  • The ability to write and speak fluently in both French and English is an asset
Preferred Skills

A strong foundation in good governance practices, along with skills and experience in business management and information technology (particularly big data and data governance), will be given preferred consideration.

How to Apply

If this opportunity to join Lactanet excites you and you meet the requirements of the position, please send us your resume by email to emploi@lactanet.ca by February 28, using the subject line « Administrateur externe ».

Lactanet is the leading dairy herd improvement organization responsible for milk recording, genetic evaluations, knowledge and know-how transfer, and dairy cattle traceability. We offer products and services to help Canadian farmers manage their dairy operations, as well as to industry companies engaged in the genetic improvement of dairy cattle breeds and to companies engaged in milk production and processing.

 

The post Lactanet -Administrateur(trice) externe appeared first on Capacity Canada.


Aquanty

Reservoir with Spillway

This post describes the new reservoir with spillway boundary condition, an improvement over the basic reservoir BC that provides greater control over water release in HydroGeoSphere models. This boundary condition allows for more realistic reservoir inflow and outflow behavior by incorporating parameters such as surcharge storage, spillway discharge, and gate-controlled release. Conceptually, the reservoir with spillway BC enhances water management simulations by dynamically adjusting outflows based on reservoir storage conditions rather than relying solely on predefined time-value tables. We find this boundary condition particularly useful for modeling hydraulic control features in complex water management scenarios.

The new reservoir with spillway boundary condition is an improvement to the existing reservoir BC that provides you with much more control over the release of water from the reservoirs that you have incorporated into your HydroGeoSphere models.

Download the following example project to review an example of the reservoir with spillway BC, based on the always popular Abdul verification problem:

Figure 1: Conceptual Model of the Basic Reservoir Boundary Condition

Click here to download example model Abdul_Reservoir_Spillway.zip

Note: you will need to run the model to produce results after download.

Figure 2: Conceptual Model of the New Reservoir with Spillway Boundary Condition

To properly introduce the new reservoir with spillway BC, it makes sense to first review the existing reservoir BC. The reservoir boundary condition facilitates the simulation of surface water management schedules that aim to remove water from instream flow nodes (overland flow and/or 1-D channel flow domains), store water, and release water back into stream flow. While water removal from instream flow is dependent upon water being present in the requisite domain, the storage is managed as an offline numerical reservoir, hence volumetric storage is independent of mesh discretization and topographic resolution. Reservoir storage can be characterized with the optional BC constraints/commands initial reservoir storage (initial volume of water stored in the reservoir at the beginning of the simulation), base reservoir storage (the base or ‘dead’ storage volume of the reservoir) and maximum reservoir storage (the maximum volume of water which can be stored in the reservoir). Figure 1 illustrates the conceptual model of the basic reservoir BC, where inflow/outflow rates are predefined using a time-value table (subject to available water volumes):

The reservoir with spillway BC expands on the conceptual model above by introducing a surcharge reservoir storage parameter, spillway parameters that define the spillway discharge rate from the surcharge storage area, and the gate discharge table that controls discharge from the maximum storage area (this takes the place of a standard time-value table in the basic reservoir BC). Furthermore, inflow to the reservoir can now be more realistically defined using the Inflow hydrograph name command (see section 2.7.3.10 of the Reference Manual for full details on all of these commands!). Figure 2 illustrates the conceptual model of the new reservoir with spillway boundary condition.

The reservoir with spillway BC differs from the basic reservoir BC in the following key ways:

  • Inflow to the reservoir with spillway is controlled by an inflow hydrograph (using inflow hydrograph name). This inflow hydrograph must be defined with the command Set hydrograph nodes prior to defining the inflow hydrograph for the reservoir. We recommend that the hydrograph node set contains the node at which the reservoir is defined.

    • With the basic reservoir BC inflow is simply controlled by a standard time value table

  • Outflow from the reservoir with spillway is handled using both pre-defined volumes (i.e. gate discharge) and calculated fluxes (i.e. spillway discharge). Additional overflow discharge is activated if the amount of inflowing water exceeds the storage capacity of the reservoir.

    • With the basic reservoir BC outflow is simply controlled by a standard time value table

Application of the reservoir with spillway boundary condition follows the usual workflow, illustrated in the *.grok snippet below:

use domain type
channel

clear chosen nodes
Choose node number
21859
create node set
dam1
clear chosen nodes

boundary condition
	type
	   reservoir with spillway
	name
	   Dam_1
	node set
    	   dam1
	inflow hydrograph name
    	   hydro
	initial reservoir storage
    	   120
	base reservoir storage
    	   20
	maximum reservoir storage
    	   100
	surcharge reservoir storage
    	   200
	spillway parameters
    	   2.0
    	   3.0  !width
    	   ! volume to elevation table
    	      0	    2.5
    	      50    2.7
    	      100   2.8	
    	      150   2.85
    	      200   2.87
    	      400   3.0
	   end
	gate discharge table
    	   0	0.02
	end
	nodal flux reduction by pressure head
    	   0.01
    	   0.1
end

Figure 3: Results of the Reservoir with Spillway Boundary Conditions

By applying hydrographs upstream, downstream and at the location of the reservoir itself we get a picture of the spillway operation (see the Dam_check.lay layout package for Tecplot, included in the project folder above). Figure 3 shows high initial inflow rates and increasing storage in the reservoir until t=3000 seconds, when the rain flux terminates. Flow out of the reservoir is initially driven by the gate discharge table (0.02 m3/s), but quickly increases in response to increased storage in the reservoir (i.e. spillway discharge kicks in). Discharge from the reservoir reaches its peak at t=3000, but declines rapidly back to the rate specified in the gate discharge table by t=4200. By t=9300 storage has dropped to the base reservoir storage of 20 m3, and discharge from the reservoir drops to near 0.

The new reservoir with spillway boundary condition provides you with a flexible tool to simulate hydraulic control features in your HydroGeoSphere models! Please let the Aquanty team know if you have any questions!

Notes:

  • The reservoir with spillway BC also supports the nodal flux reduction by pressure head command, which may be used to limit inflow to the reservoir. We recommend this option in practice, to avoid removing too much water from the system, which can lead to model instability.

  • The reservoir with spillway BC automatically writes timeseries info to the prefixo.reservoir spillway.bcname.dat file, which provides more detailed information than the typical Tecplot output command.


Capacity Canada

Lactanet – External Director

Board of Directors – External Director

Are you a dynamic and experienced leader looking to shape the future of a forward-thinking organization? We are actively seeking an accomplished individual to join our Board of Directors as an External Director. In this role, you will play a pivotal part in guiding the strategic direction of our organization. We are seeking an individual with a diverse background and expertise who can bring fresh perspectives, innovative ideas, and a commitment to excellence to our Board.

Key Responsibilities
  • Collaborate with fellow board members to define and implement the company’s strategic vision
  • Provide valuable insights and expertise related to your field of specialization
  • Participate actively in board meetings, committees, and strategic planning sessions
  • Assist in identifying potential risks and opportunities for the company
  • Represent the company’s interests in a professional and ethical manner
Qualifications & Experience
  • A proven track record of success in your respective field
  • Experience serving on boards or in leadership positions
  • Strong interpersonal and communication skills
  • A passion for the dairy industry and a commitment to the company’s mission and values
  • Demonstrated ability to think critically and strategically
  • A passion for innovation and
  • The ability to write and speak fluently in both French and English is an asset
Preferred Skills

A strong foundation in good governance practices, along with skills and experience in the areas of information technology (specifically big data and data governance) and strategic planning, will be considered an asset.

How to Apply

If you are passionate about this opportunity at Lactanet and meet the position qualifications, please email your resume to careers@lactanet.ca by February 28th, using the subject line “External Director”.

 

 

The post Lactanet – External Director appeared first on Capacity Canada.


Aquanty

Visualizing Model Components Using 'to tecplot' Commands

This post describes how to use Tecplot export commands to visualize model properties and structures without running a full simulation. These commands allow users to quickly inspect their model setup by generating Tecplot-formatted files directly from grok.exe. These commands are particularly useful for reviewing mesh structures, material properties, and domain features like fractures and channels before committing to a full model run. We find these tools extremely helpful for catching errors early and streamlining the model-building process.

Figure 1: Tecplot visualization of various to tecplot command outputs.

A recent request from a user resulted in the new command channels to tecplot (introduced in revision 2372, March 2022), so we thought it would be a good time to review all the similar commands that write model information to Tecplot formatted data files.

If you would like to review the commands presented here and reproduce the images below you can download the example project ‘Abdul_to_Tecplot’.

Click here to download 'Abdul_to_Tecplot' example project.

In this example we have made the following changes compared to the typical ‘Abdul’ verification problem:

  1. Two porous medium property zones are included, with different hydraulic conductivity and porosity values

  2. The fracture domain is included (location of fractures is arbitrary)

  3. The channel domain is included (location of channels is arbitrary)

  4. Evapotranspiration is active, with two ET zones.

To easily review these changes, we can use the following commands to generate Tecplot formatted output files when grok.exe is run. That means you don’t have to run the entire model through phgs.exe in order to generate and visualize the results specific to the changes above.

  • mesh to tecplot exports the entire 3D mesh to a Tecplot formatted ASCII file (mesh_Tecplot.dat).

  • K to tecplot writes elemental hydraulic conductivity information to a Tecplot formatted ASCII file (mesh_K_Tecplot.dat)

  • porosity to tecplot writes elemental porosity information to a Tecplot formatted ASCII file (mesh_porosity_Tecplot.dat)

  • ET zones to tecplot writes evapotranspiration zones to a Tecplot formatted ASCII file (mesh_ET_Tecplot.dat)

  • channels to tecplot writes the channel mesh to a Tecplot formatted ASCII file (mesh_channels_tecplot.dat)

  • fractures to tecplot writes the fracture mesh to a Tecplot formatted ASCII file (mesh_fractures_Tecplot.dat)

Applying the commands above is really simple, only requiring you to write the command and provide a filename (with file type *.DAT) for the Tecplot file (a portion of the ‘abdul_to_tecplot’ grok file is reproduced below):

!------------------------------------- Generate Tecplot Output
mesh to tecplot
mesh_Tecplot.dat

K to tecplot
mesh_K_Tecplot.dat

porosity to tecplot
mesh_Porosity_Tecplot.dat

ET zones to tecplot
mesh_ET_Tecplot.dat

Channels to tecplot
mesh_channels_tecplot.dat

fractures to tecplot
mesh_fractures_Tecplot.dat

You only need to run grok.exe to generate the output files which allow you to visualize the distribution of the fractures, channels and properties throughout your model domain. This is really helpful for constructing your models, since you don’t need to run through a lengthy simulation only to realize you assigned your channels/fractures to the wrong faces/segments! It’s also a very handy way of reviewing your 3D mesh before committing to a full model run.

For any users who weren’t aware that these commands existed, I really hope they make the model building experience just a little bit easier!


Aquanty

Particle Tracing

This post describes how to use particle tracing in HydroGeoSphere, introduced with the release of HGS revision 2385 (April 2022). This new capability allows users to track the movement of massless particles through the subsurface domain, providing valuable insights into flow patterns and transport processes. Conceptually, particle tracing helps visualize how water moves through the system, making it a powerful tool for analyzing groundwater flow and solute transport. We find this feature particularly useful for understanding complex flow dynamics.

We’re really pleased to announce the introduction of particle tracing capabilities with the release of HGS revision 2385 (April 2022 release). In this post we’ll explore some of the new commands associated with this new feature and see how you can incorporate particle tracing into your own models.

Note: particle tracing is based on advective transport of a massless particle through the subsurface domain. The particle tracing implementation here does not account for retardation, dispersion or diffusion processes, and is not a suitable alternative to a full ADRE solute transport model.

Of course, to use particle tracing you will need to install the April 2022 version (or newer) to your computer. Visit our ‘Download Page’ (www.aquanty.com/hgs-download) to access the installers.

Figure 1: In-element tracing of a particle with known starting position and elemental velocity field.

If you would like to review the commands presented here and reproduce the images below you can download the example project ‘R5_particles’. This example is based on the ‘Intro to HGS’ tutorial that we review in our monthly training sessions, with some modifications.

  • Hydraulic conductivities have been increased to promote faster travel times

  • Pumping rates have been increased, and no longer cycle on/off (they just stay on)

  • Evapotranspiration has been deactivated (to reduce model runtime)

  • Overall simulation duration has been increased, and fewer output times are included

  • Some modifications to timestep controls and numerical solution criteria

And of course, we have included all the necessary particle tracing commands at the end of the *.grok file for you to review and adapt to your own models.

Click here to download example model: r5_particles.zip

Figure 2: Determining the particle exit point

Before we jump into the practical aspects of particle tracing with HydroGeoSphere, let’s briefly explore the way it has been implemented. Particle tracing tracks the position of an idealized massless particle as it moves through the subsurface domain following the flow field until it either exits the model via a boundary condition or exits to the surface. Each particle is released from an initial location within the subsurface domain at an initial release time. The initial location of particles and the initial release time are both user-specified.

The path that a particle travels within any given element is based on a linear interpolation of the velocity field at the given element (Figure 1). The travel time to intersection with each face associated with the element is then calculated, and the exit point of the particle is determined on the intersection point with the shortest travel time (Figure 2).
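In loose terms (a sketch of the idea rather than HGS's exact numerics): if $t_i$ denotes the computed travel time from the particle's current position to its intersection with face $i$ of the element, then the exit face is simply the first one reached,

$$ t_{\mathrm{exit}} \;=\; \min_i \{\, t_i : t_i > 0 \,\}, $$

and the particle is advanced to that intersection point before the procedure repeats in the neighbouring element (subject to the exit criterion described next).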

Figure 3: Particle reflection and the exit criterion

Particles may ‘reflect’ within an element if a certain exit criterion is not satisfied (Figure 3). At a potential exit point the velocity vector of the target element is multiplied by a unit vector perpendicular to the shared face; if the result is not positive then the particle will be reflected within the original element. The particle will then follow the direction of the velocity vector of the neighboring element, until it encounters another face. The total travel time of the particle within the original element is equal to the sum of travel times until the exit criterion is satisfied (Figure 4). Sometimes an element has no faces which meet the exit criterion, in which case the element is classified as a ‘dead-end element’. In these cases the faces of the dead-end element are treated as reflection walls, and particles will not travel into them (Figure 5).

Figure 4: Particle travel time

Particles will travel throughout the model domain until they encounter boundary nodes with negative fluxes, which are termed ‘sink nodes’. If all nodes associated with a particular face are classified as sink nodes, then a particle may exit the model at any point in the face (Figure 6). If only one or two nodes in the exit face are classified as sink nodes then the particle may exit through the nearest sink node or segment (Figure 7).

Now that we have a basic understanding of how particle tracing works in HydroGeoSphere, let’s take a closer look at how to apply this feature in a model. We have introduced nine new commands that can be used for particle tracing (see section 2.11 of the Reference Manual for more info). Here is a brief description of each new command:

Figure 5: Dead-end elements

Figure 6: Particle exiting elements via faces

  • trace particle causes HGS to activate particle tracing.

  • trace particle logging causes HGS to write detailed information for each particle to an ASCII file.

  • initial particle location from file causes HGS to specify the initial location, group ID and release time for particles using an ASCII input file.

  • initial particle location by layer from file causes HGS to specify the initial location by layer, group ID and release time for particles using an ASCII input file. Particles will be placed at the vertical mid-point of the assigned layer.

  • output times for particle locations causes HGS to record particle locations at specified times.

  • maximum trace time causes HGS to specify the maximum time at which particle traces are updated. Tracing effectively stops after this time.

  • maximum trace count causes HGS to limit the maximum number of locations allowed in a particle trace path (can be used to limit memory consumption).

  • maximum trace output causes HGS to limit the maximum number of particles to record in the particle trace and particle location output files.

  • maximum particle reflection count causes HGS to limit the number of particle reflections that are permitted over a trace step when updating a particle's location. This command allows HGS to deactivate particles that are unable to move from one element to a neighboring element.

Figure 7: Particle exiting elements via nodes and segments

Eight of these new commands are demonstrated in the associated ‘R5_particles’ example model (download link above). If you scroll to the bottom of the *.grok file you will see the following commands (the inline comments from the *.grok file have been removed for clarity):

trace particle     
Trace particle logging      
Initial particle location by layer from file    
R5_5m_pts2.txt                           
 !Note: a similar command 'initial particle location from file' can be used, except instead of a layer # you would provide a Z-coordinate. 
Output times for particle locations     
    86400        !24 hours
    2.592e+6    !30 days
    3.156e+7     !1 year
end
maximum trace time      
3.156e+8               
Maximum trace count     
50000                    
!Maximum trace output   
!1000                  
maximum particle reflection count  
50   

Figure 8: Initial particle distribution

In this model we have seeded the first layer of the model domain with particles at 5m spacings (Figure 8).

Running this model will generate the following "extra" output files (i.e. extra with respect to a project without particle tracing):

  • R5_particleso.particle_travel_time.csv

  • R5_particleso.particle_location.dat

  • R5_particleso.particle_trace.0001.dat

  • R5_particleso.particle_trace.0002.dat

  • R5_particleso.particle_trace.0003.dat

Let’s take a closer look at these files and the data contained in them.

First, the ‘particle_travel_time’ output file records the status, exit type, exit name, travel time [T], and travel length [L] of each particle (Figure 9). There are seven possible ‘status’ values for individual particles:

Figure 9: The ‘particle_travel_time’ output file

  • Moving (Status =0) – the particle is moving through the model domain.

  • Normal exit (Status =1) – the particle has exited the model domain via either a boundary condition or to the surface.

  • Unreleased (Status =2) – the particle has not been released yet (release time is user defined)

  • Max trace time (Status =3) – the maximum trace time for the particle was reached (as defined by the maximum trace time command).

  • Max trace count (Status =4) – the maximum trace count for the particle was reached (as defined by the maximum trace count command).

  • Abnormal exit (Status =5) – this status is triggered for a particle when the maximum reflection count is repeatedly exceeded (as defined by the maximum particle reflection count command). In this case the particle is not able to make any progress!

  • Bad intersection (Status =6) – this status indicates that an error has occurred when computing the intersection point between the particle’s trajectory and the face of the element that currently contains the particle. This status is not common and indicates a breakdown in the particle tracing numerics (and should be further investigated!).

In our example, the ‘R5_particleso.particle_travel_time.csv’ file should indicate that all particles are either still moving, or they have encountered a normal exit. For each particle with a normal exit we can also take a closer look at the “Exit Type” (i.e. the type of boundary condition through which the particle exited, for example “Flux Nodal”) and the “Exit Name” (i.e. the user specified or HGS default name of the individual boundary condition). The vast majority of particles in this example have exited through the flux nodal boundary condition representing the pumping wells throughout our model domain (i.e. 'Fnodal_5').

Figure 10: The ‘particle location’ output file

Next, the ‘particle_location’ output file is an ASCII Tecplot file that records the location (X/Y/Z coordinates), group ID, and status of each particle through time (Figure 10). The locations/status of particles are recorded for all output times specified in the output times for particle locations command. The status listed here corresponds to the statuses mentioned above for the ‘particle travel time’ output files.

Finally, a ‘particle_trace’ file is generated for each of the specified particle output times (as defined by the output times for particle locations command). These are ASCII Tecplot files that record the trace path for each particle from its initial release time up to the current trace output time. We can use these files to visualize particle trace pathlines and initial particle locations in Tecplot.

To display particle traces in Tecplot we recommend using the mesh to tecplot command to generate a simple Tecplot formatted output file for your model mesh (i.e. the “R5mesh.dat” file generated by grok.exe in this example problem). You can load the model mesh file into Tecplot easily enough (drag and drop) and then use the ‘Load Data’ workflow to import any of the “R5_particleso.particle_trace.XXXX.dat” files.

Note: Make sure to “append data” instead of overwriting the R5mesh.dat file when importing particle traces.

Figure 11: Particle trace path lines for the ‘R5_particles’ model after 30 days (black lines indicate particle traces; red spheres indicate the location of pumping wells (Fnodal_5))

Once the particle trace data has been imported you can activate the ‘edge’ data layer to display particle traces, and the ‘scatter’ data layer to display particle locations recorded at each timestep. Figure 11 illustrates particle traces after 30 days (i.e. the ‘R5_particleso.particle_trace.0002.dat’ output file).

Notes on visualization:

  • It is also possible to show particle traces in the context of hsplot.exe output files (e.g. the prefixo.pm.dat file). In this case output times for the particle tracing feature should align with the overall output times for the flow simulation. Furthermore, it is necessary to delete the first zone in the hsplot output file to display the particle traces alongside the prefixo.pm.dat file.

    • First load the prefixo.pm.dat file.

    • Next delete the first zone by clicking: Data -> Delete -> zone- > 1:pm

    • Then load the particle trace file as described above.

  • Each individual particle/trace can be assigned its own unique visualization settings. However, if you have some sense beforehand where different groups of particles will ‘exit’ the model then you are encouraged to classify these into different particle groups (i.e. using the GroupID assigned in the initial particle location from file command).

  • You can also use ‘Value blanking’ in Tecplot to turn on/off the visualization of particle traces based on the Group ID.

Adding Particle Tracing to an Existing Model:

  • Many users will be interested in adapting existing models to include particle tracing. In this case please note that the particle tracing feature can be used in conjunction with the defined flow command. With this command you can use the existing flow results from your model, avoiding the need to run the full flow simulation again.

We hope you like this new feature! As always, if you have any comments or recommendations on how this feature (or any other) can be improved feel free to reach out to the Aquanty team by emailing support@aquanty.com.


Aquanty

'use tabulated unsaturated functions'

Note: this post claims that using the command use tabulated unsaturated functions will result in improved model runtimes. While this is usually true, in some cases it may actually slow down model runtimes. When using unsaturated tables it is necessary for the model to do a lookup followed by linear interpolation to compute the saturation/relative permeability values. If the table is fine enough, then the lookup and linear interpolation may in fact take longer than evaluating the functions directly. The commands table smoothness factor, table minimum pressure, and table maximum s-k slope control the overall 'fineness' of the table (i.e., the number of entries in each table).

This post describes how to use the use tabulated unsaturated functions command, introduced in the May 2022 release (Revision 2397) of HydroGeoSphere, to streamline the implementation of tabular constitutive relationships for unsaturated flow. By automating the process of generating and applying unsaturated tables, this command reduces manual steps and can improve model runtimes. However, in some cases, using unsaturated tables may introduce additional computational overhead. We find this command particularly useful for users working with van Genuchten or Brooks-Corey functions who want to optimize performance while maintaining accuracy.

In the May 2022 (revision 2397) release of HydroGeoSphere we introduced a new command called use tabulated unsaturated functions that should reduce model runtimes for those of you who use the unsaturated flow functions (van Genuchten and/or Brooks-Corey), and should help those of you who prefer to use unsaturated tables instead.

First a bit of background… In a HydroGeoSphere model water flow in the unsaturated zone is governed by the three-dimensional modified form of the Richards’ equation (see equation 2.1 in the Theory Manual). The primary variable being solved for in this equation is pressure head, and therefore constitutive relationships must be established that relate the primary unknown (pressure head) to the secondary variables Sw (saturation) and kr (relative permeability). In HydroGeoSphere these constitutive relationships are established with common functions including van Genuchten [1] and Brooks and Corey [2].
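For orientation, a simplified sketch of a common form of this equation (HGS's modified form in the Theory Manual also includes inter-domain exchange terms) shows where those secondary variables enter:

$$ \frac{\partial}{\partial t}\big(\theta_s\, S_w\big) \;=\; \nabla \cdot \big[\, \mathbf{K}\, k_r\, \nabla(\psi + z) \,\big] \;+\; Q $$

Here $\psi$ is the pressure head (the primary unknown), $z$ is the elevation head, $\theta_s$ is the saturated water content, $\mathbf{K}$ is the saturated hydraulic conductivity tensor, and $Q$ represents sources and sinks. Because both $S_w$ and $k_r$ depend on pressure head, the constitutive relationships described below are needed to close the equation.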

Now the van Genuchten and Brooks-Corey functions can be fully parameterized and utilized to inform unsaturated flow throughout the model domain. This is done using the Unsaturated brooks-corey functions...End or Unsaturated van genuchten functions...End command blocks, and the associated parameters (e.g. the alpha and beta coefficients, air entry pressure, minimum relative permeability, residual saturation, etc.). If the functions are fully defined, then we would say that flow in the unsaturated zone is controlled using functional constitutive relationships (see section 2.8.3.4 of the Reference Manual).

Figure 1: Command Description

Alternatively, these constitutive relationships can be described using simple tables that describe the relationship between pressure-saturation and saturation-relative permeability. If the constitutive relationships are described using these tables, then we would say that flow in the unsaturated zone is controlled using tabular constitutive relationships (see section 2.8.3.5 of the Reference Manual).

Figure 2: Command Added to 'smith.mprops' File

Now, using functional constitutive relationships is typically more computationally intensive than using tabular constitutive relationships. Therefore, we have long supported a command (generate unsaturated tables) that would create the pressure-saturation and saturation-relative permeability tables based on user-defined functional constitutive relationships. In other words, a user would fully define the van Genuchten or Brooks-Corey parameters and the resulting pressure-saturation and saturation-relative permeability tables would be written to a file. The user could then swap the tabular constitutive relationship in to their material property files to effectively replace the functional constitutive relationships, thereby improving model runtimes. This entire process is described in an earlier ‘Command of the Week’ post (Speeding up HGS models using “unsaturated tables”)

The introduction of the new command use tabulated unsaturated functions significantly streamlines this process. Prior to May 2022 a user would have to manually copy the tabular relationships into their material property files, then the user would have to run grok.exe again before running their simulation.

The new command simply automates those steps. As you can see from the command description (Figure 1), “[this command] is similar to the command Generate tables from unsaturated functions, with the added benefit that it allows grok to be run only once and the generated tables will be used. There is no need to copy the generated tables into their respective property files followed by running grok again.”

To demonstrate the new command, you can download a modified version of the “smith-woolhiser” verification problem:

Download: Smith-woolhiser-use-tabulated-unsaturated-functions

Figure 3: Simulation Runtime Improvements

In the porous media material property file (smith.mprops) we can see that the new command has been embedded into each of the Unsaturated van genuchten functions...End command blocks (see Figure 2). That’s all you need to do to implement this command!
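As a rough sketch of that placement (see Figure 2 for the actual file; the existing parameter lines are represented here only by a comment), the new command simply sits inside each function block of the .mprops file:

unsaturated van genuchten functions
    ! ... existing van Genuchten parameters for this material
    ! (alpha, beta, residual saturation, etc.) are left unchanged ...
    use tabulated unsaturated functions
end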

If we run this model the tabulated unsaturated functions are automatically subbed into the simulation, which should result in improved model runtimes. Figure 3 illustrates the runtime improvements of this modified version of “smith-woolhiser” compared to the regular version of this problem (found in the ‘verification’ folder, within the HGS installation directory).

As you can see, the new command is really simple to implement, and should result in improved runtimes for anyone using functional constitutive relationships. Let us know what you think!

References
[1] van Genuchten, M. T. (1980). A closed-form equation for predicting the hydraulic conductivity of unsaturated soils. Soil Sci. Soc. Am. J., 44:892–898.
[2] Brooks, R. J. and Corey, A. T. (1964). Hydraulic properties of porous media. Technical Report Hydrology Papers 3, Colorado State University, Fort Collins, CO.


Code Like a Girl

All you need to know about Windsurf AI IDE

Windsurf AI’s new IDE is a groundbreaking technology created by Codeium, aimed at improving the coding experience with advanced AI…

Continue reading on Code Like A Girl »


Aquanty

Assign Multiple Observation Points Based on Depth from Ground Surface

The June 2022 release of HydroGeoSphere (Revision 2409) introduced a powerful yet simple set of commands to streamline the definition of observation points in your models. With six new commands, you can now assign observation points based on absolute elevations or depth from ground surface, reducing manual input and leaving less room for errors. These new options make it easier to work with field data, ensuring observation points align precisely with real-world measurements. To illustrate their utility, we’ve included an example model demonstrating how to efficiently place multiple observation points relative to ground surface elevation.

Figure 1: “Make observation Point by depth” example input

We introduced a very simple (but very useful) set of commands in the June 2022 release (revision 2409) that should really help you define observation points throughout your model (thanks as always to the user community for the great suggestion!).

In fact, we’ve introduced six new commands that allow you to assign one or more observation points (or interpolated observation points) based on absolute elevations, or based on a depth from ground surface. These new commands include:

  • make observation point by depth assigns an observation point based on depth from ground surface

  • make observation points assigns multiple observation points using the ‘usual’ absolute elevation input

  • make observation points by depth assigns multiple observation points based on depth from ground surface

  • make interpolated observation point by depth assigns an interpolated observation point based on depth from ground surface

  • make interpolated observation points assigns multiple interpolated observation points using absolute elevation input

  • make interpolated observation points by depth assigns multiple interpolated observation points based on depth from ground surface

Figure 2: Resulting distribution of observation points using “Make observation Point by depth”

To illustrate the utility of these commands I have included a very simple example model, based on the Abdul test case ("abdul_multiple_obs_by_depth").

Click here to download the example project.

In this example project we’ve used the make observation points by depth command to quickly assign eight observation points along the center of the Abdul stream channel, exactly 0.5m below the ground surface (see Figure 1). As you can see from Figure 2, assigning these observation points based on absolute elevations could be slightly difficult, since the exact elevations might not be known beforehand! Sometimes in the field you may simply record the depth of the observation point based on the ground surface, so this command should help you work with the data you already have. And as an added bonus you save yourself the trouble of copy/pasting the same command repeatedly.

After running the model we can load the observation points and see how easy it was to place them exactly 0.5m below ground surface:

We hope this saves you some time! As always, let the development team know if you have suggestions for other improvements.


Elmira Advocate

OUR AUTHORITIES HAVE MORPHED OVER DECADES INTO PROTECTORS AND INSULATORS OF BAD CORPORATE BEHAVIOR

 Citizens are basically viewed as widgets of varying status whose purpose is to be compliant little worker bees. We are to work hard for an employer, follow all his/her rules as well as allegedly society's rules (laws) and maybe vote every several years to give the appearance of democracy.

When a new corporate owner takes over a contaminated site, it is the Ontario Ministry of Environment that calls the shots. Unless, of course, as with Phillips taking over Varnicolor Chemical, the new owner finds the requirements too onerous. Then they tell the MOE/MECP what they will agree to, and the MOE hops to it and makes the changes the prospective new owner wants. Never are the local citizens who likely exposed the environmental violations in the first place invited to the table. Oh no, it's the guilty-as-sin MOE/MECP who sit down and sell the farm. This has occurred here in Elmira with Uniroyal and Varnicolor, and in Breslau with Safety-Kleen when they bought out Breslube. The long-suffering citizens are given meaningless verbal assurances, never serious, written, enforceable commitments.

Each new prospective owner of Uniroyal Chemical, from Crompton to Chemtura to Lanxess, has assured itself prior to signing on the dotted line that it will only have to dress up, speak softly and employ credentialed consultants to sell whatever the flavour-of-the-day bull is to unsuspecting citizens. Most of the honest citizens who held the corporate polluter accountable decades ago are either deceased, senile, or just plain way past their prime. The current crop, especially those handpicked by Woolwich Township, were picked only partially for their technical proficiency. When the entire cleanup is a scam, what is most needed are quiet, deferential citizens who know their places and do not rock the boat.



Child Witness Centre

2025 Pancake Lunch

The 27th annual edition of our signature event will take place on March 4, 2025. This will be a wonderful opportunity to come together on Shrove Tuesday in support of local young survivors and their families!

The post 2025 Pancake Lunch first appeared on Child Witness Centre.


Child Witness Centre

A Special Opportunity: Feb 2025

Now is a uniquely special time to make a significant impact for kids and their families in critical need of support, while also benefiting at tax time!

2024 Tax Deadline Extended for Donations

You may have heard the Federal Government has extended the 2024 deadline for charitable donations due to the mail delivery service disruption in late 2024. This means if you donate by February 28, 2025, you can choose to claim that tax receipt on either your 2024 or 2025 income taxes. We encourage you to take advantage of this opportunity by giving a donation today. For our monthly donor heroes, if you would like an interim tax receipt generated for this reason (rather than waiting until year-end), please email admin@childwitness.com.

Your $200 Rebate Can Change Lives

In recent weeks, many Ontario residents have received a $200 tax rebate from the provincial government to help alleviate the higher cost of living. Did you know there’s an initiative inspiring recipients to donate this amount to charity? If you're able to give, would you consider donating some or all of your rebate to support local young victims? On average, it costs $50 for a child and their caregivers to have a session with our team. Every single one of those meetings can dramatically change a life, and set them on a positive trajectory.

Help the Most Vulnerable When Needed Most

We're striving to provide every young victim with the immediate support they deserve by wiping out our waitlist. Thanks to your generosity, this list now sits at 45 kids, well down from its peak of 204 in 2023. Will you help us remove this barrier entirely? Your options include one-time or monthly gifts, multi-year pledges, and giving to our endowment fund. We’d also love for you to support our Pancake Lunch or to host your own fundraiser!

Thank you for your amazing support, especially in light of the many challenges our community is facing! Together, we’re making a real and lasting impact.

Warmest regards,
Robin Heald | Executive Director

The post A Special Opportunity: Feb 2025 first appeared on Child Witness Centre.


Code Like a Girl

Why the Best Ideas Come from Listening, Not Talking

Some of the best ideas don’t come from structured meetings or intentional brainstorming. They slip through the cracks of casual conversations, linger in the pauses between words, and emerge from the silent undercurrents of everyday life.

We’re taught to listen to what is being said. But real learning — deep, intuitive understanding — comes from tuning into what isn’t being said. The glance exchanged between colleagues when an idea is floated in a meeting. The slight hesitation before someone agrees to a plan. The way people shift in their seats when a certain topic arises. These are the whispers of unspoken truth, the hidden layers of meaning waiting to be uncovered.

♦Photo by CoWomen on Unsplash

Deep Listening as a Skill

Pay attention to the rhythm of human interaction. The cadence of a conversation in a café, the fragmented thoughts exchanged in an elevator, the unfinished sentence trailing off as someone hesitates to share their opinion — all of these hold valuable insights if we learn to listen differently.

Instead of just passively hearing words, ask yourself:

  • What’s the underlying emotion here?
  • Where is the hesitation, the excitement, the fear?
  • What’s being avoided, glossed over, or left unsaid?

When we practice this kind of deep listening, we start picking up on patterns. We see the cultural shifts before they become trends. We sense the doubts that haven’t yet been vocalized. We hear the future forming in the present moment.

Brainstorming in the Gaps

Traditional brainstorming assumes great ideas happen in structured spaces — conference rooms, whiteboards, and scheduled sessions. But what if the best ideas come from the gaps between those moments?

Imagine the last time you overheard something that stuck with you — a phrase from a stranger, a question someone asked in passing. These snippets, when revisited, often hold the seeds of something bigger. True creativity isn’t about forcing ideas into existence; it’s about leaning into the quiet moments, the unintended revelations, and the thoughts that hover just outside our direct awareness.

Try this:

  1. Keep a notebook (or a notes app) for those unfinished fragments of conversations.
  2. Revisit them later and ask yourself: What’s the bigger idea hidden within?
  3. Let these ideas cross-pollinate with your own experiences and projects.

You’ll be surprised how often a casual remark or a seemingly random thought can evolve into something powerful when given space to grow.

Leaning In

Leaning in isn’t just about active participation — it’s about intentional presence. It’s the difference between merely attending a meeting and observing its dynamics. Between reading a book and noticing what lingers with you long after you’ve turned the last page. Between hearing words and understanding the emotions behind them.

The world speaks to us in whispers, in hints, in subtle nudges.

♦Photo by Krystian Tambur on Unsplash
The question is: Are we paying attention?

Next time you find yourself in a crowded room, a quiet café, or even scrolling through social media, pause. Notice what stands out. Lean in — not just to the loudest voices, but to the quiet spaces in between. Because sometimes, the best ideas are waiting just outside the edge of conscious thought, hoping you’ll listen closely enough to bring them to life.

Why the Best Ideas Come from Listening, Not Talking was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


James Bow

The Premier Who'd Rather Dig a Hole Beneath a Highway

♦The photograph on the right of Doug Ford was taken in 2014 by Eunice Kim and is used in accordance with their Creative Commons License.

So, we're having an election this month in Ontario. Premier Doug Ford called it on January 28, and we go to the polls on February 27.

Ostensibly, Doug Ford called this election to act like Captain Canada and fight back against anticipated punitive tariffs levied by a demented American president intent on annexing us. I must admit, I quite enjoyed seeing the conservative Ford talking smack about Trump and Musk. I applauded his punches, such as pulling American alcohol off Ontario's shelves and cancelling the province's Starlink contract for satellite internet service.

But then Trump blinked. Or, rather, he followed through on his pattern of talking big, then backing down and claiming victory. His tariffs are supposedly on hold for 30 days, ostensibly in response to Canadian moves along the border that Trudeau had largely agreed to when Biden was president. Ford has been left high and dry with no real issue to campaign on, save for a cockamamie plan to build a road tunnel beneath Highway 401 from Mississauga to Pickering. He even backed out of his plan to cancel the Starlink contract -- though, to be fair, he could argue that his move had its intended effect of stopping the tariffs, and cancelling signed contracts is expensive. Still, I would contrast Ford's response with that of Quebec, which is continuing to pursue the cancellation of its Starlink contract and is working with the federal government to set up a Canadian-made satellite internet system, but I digress.

You see, Ford now has a problem: the one excuse that made his early election call make any sense has been taken away. And if people were paying attention, they would see that this was never his real reason for going to the polls early. Ford has been signalling his desire to hold an early election for months, long before Trump retook the White House.

On June 2, 2022, Doug Ford and his Conservatives won a four-year majority mandate in Queen's Park with around 40.5% of the vote. By tradition and, I believe, by attempts to legislate a fairer electoral process, that mandate lasts until June 2026. Ford's Conservatives continue to hold the majority of seats at Queen's Park and there are no serious challenges to Ford's leadership. Even when he had Trump's tariffs as an excuse to call a snap election, the moves he took during the campaign were things that he could have done without holding an expensive and unnecessary early election. So, why do we need to spend $189 million to go to the polls now in the dead of winter rather than in June 2026?

The only reason that makes sense is that Ford fears the result he'll face if he waits until the election date that tradition dictates.

Months ago, when Ford started considering calling an early election, he was looking at a situation in Ottawa where Trudeau seemed likely to lose big to the federal Conservative Party under Pierre Poilievre. History suggests that Ontario voters tend to hedge their bets when it comes to their elections. When a Conservative government sits in Ottawa, Ontario voters tend to vote Liberals into Queen's Park, and vice versa.

In summary: Ford is calling an election now because he thinks he's going to lose in 2026. So my question is, why should I vote for a premier who feels that he's already lost?

Dare Ford run on his record? When he was elected, he campaigned on stopping hallway healthcare in our hospitals and eliminating a persistent deficit run by the previous Liberal government. After seven years of Ford being in charge, the deficit is roughly the same, and our overstressed emergency rooms remain overstressed. The solution there is simple: invest more money in emergency room staff, nurses and doctors. But Ford would rather waste billions of dollars digging a hole beneath a highway.

As our cost of living increases and rents go through the roof, rather than spend money building decent and affordable housing where people need it, he tries to sell off flood-prone Greenbelt land to developers to create McMansions on the periphery of our cities. Instead of ensuring that our public transit is properly funded and in a state of good repair, he wastes money on unnecessary highways that won't see traffic for years. He attempts to bribe us with our own money in the form of tax rebates rather than spend that money tackling the issues that matter. And with this early and unnecessary election, he's moved to try and secure his own future at the expense of everybody else's.

It's worth noting that the last time an Ontario premier with a huge majority mandate called an election more than a year before they had to, the result was the unexpected election of Ontario's only New Democratic government, even though the incumbent Liberal government went into that election with a 10-point advantage in the opinion polls.

Might something similar happen this time? Only time will tell. But it's worth remembering that Ontario voters have sniffed out self-serving premiers before, and we are quite willing to punish them for their arrogance.


KW Habilitation

February 10, 2025: What’s Happening in Your Neighbourhood?

Coffee Club Game Night
Monday, February 10
7:00 PM – 9:00 PM
FREE
KW Habilitation – 99 Ottawa St. S, Kitchener

Bring your own board games, cards, Jenga or whatever you want to play. Come chat and hang out with a great group of people. We will have Boy Meets World playing in the background in case we need something to watch. This event is hosted by the Waterloo Region Family Network and takes place at KW Habilitation on Ottawa St. Come and have a fun time with us at Coffee Club and check out what other Events we have going on.

Click here for more info

 

♦Karaoke Pub Night
Tuesday, February 11
6:00 PM – 7:00 PM
$40
Edelweiss Tavern – 600 Doon Village Rd, Kitchener

We’re going back to Edelweiss tavern for this exciting karaoke night! Bring your friends and family and sing the night away at Edelweiss Tavern, where you can buy drinks of your choice, including near-beers! Snacks will be provided!

Click here for more info

 

 

♦BINGO/Trivia
Thursday, February 13
3:00 PM – 4:30 PM
FREE
Health Caring KW – 44 Francis St. S, Kitchener

Join this fun and educational Bingo group, where we explore a new topic each week while enjoying snacks and friendly competition. It’s a great way to learn something new, socialize, and have a chance to win prizes! Health Caring KW is a great place with lots of free events happening. You can always check out their Events Calendar for more fun things to do at Health Caring KW.

Click here for more info

 

“Avatar” Free Indoor Screening♦
Tuesday, February 11
6:00 PM – 8:50 PM
FREE
Tapestry Hall – 74 Grand Ave. Cambridge

Join us for a special Valentine’s week “Big Screen Tuesday”! On Tuesday, February 11th we will have a FREE screening of “Avatar (2009)” on Tapestry Hall’s screen! There will also be Meander Shows to enjoy at 5:30 PM and 8:50 PM. Our concession stand can be found at Grand Hall’s bar and will be open throughout the film. Outside food and beverage permitted. No outside alcohol allowed.

Click here for more info

 

Coffee Club Event: Mocktails and Painting – Register Ahead
Monday, February 24
7:00 PM – 9:00 PM
$5 – Registration Required
KW Habilitation – 99 Ottawa St. S, Kitchener

Join Kim and Sandra for Mocktails and Painting. Cost is $5 per person. Please RSVP by Friday, February 14 by sending an email to carmen.sutherland@wrfn.info

Click here for more info

 

Agatha Christie’s Murder on the Orient Express

Friday, February 14 and Saturday February 15
Friday 7:00 PM and Saturday 2:00 PM and 7:00 PM
$35.20
Registry Theatre – 122 Frederick St. Kitchener

Come see an exciting and classic murder mystery show at the Registry Theatre presented by Playful Fox Productions. Just after midnight, a snowdrift stops the Orient Express in its tracks. The luxurious train is surprisingly full for the time of the year, but by the morning it is one passenger fewer. An American tycoon lies dead in his compartment, stabbed multiple times, his door locked from the inside. You are sure to enjoy all of the twists and turns this narrative has to offer. Grab your tickets today.

Click here for more info

 

Sketches n’ Sips
Saturday, February 15
4:00 PM – 8:00 PM
FREE
Wave Maker Craft Brewery – 639 Laurel St. Cambridge

Celebrate love in all its forms with us at Wave Maker Craft Brewery on Saturday, February 15th for our Sketches & Sips Valentine’s Event! Los Rolling Tacos will be serving up their delicious eats from 4-8pm. Caricature Artist AJ Manzanares will be here from 5-7pm creating fun, personalized sketches for just $10 per face!

Click here for more info

 

Sweetheart Drum Social
Saturday, February 15
6:00 PM – 10:00 PM
FREE
White Owl Native Ancestry Association – 65 Hanson Avenue Kitchener

This will be a potluck-style event, so bring your feast bundles! Charging Horse will be jamming some deadly tunes throughout the evening! All drums are welcome. Come on out for an evening of singing, dancing, munching and spending time with friends and community. You are welcome to bring your crafts to work on while hanging out! Big drums, please email Aryawna at aryawna@wonaa.ca so we can make sure space is set up for you!

Click here for more info

 

KW Little Theatre

KW Little Theatre is a volunteer-run theatre in Waterloo. They pride themselves on being an entry point for anyone wanting to work in theatre. KWLT holds auditions that are open to anyone, regardless of experience.

They have some Great Shows coming up at affordable prices. Their newest show Pippin opens Thursday, February 20 at the Registry Theatre – 122 Frederick St. Kitchener. Tickets are just $25 and cost even less if you have a membership.

You can also check out their Volunteer Page for current volunteer opportunities to get involved! You can send an email to info@kwlt.org and let them know how you’d like to contribute. Whether it is behind the scenes or centre stage, this community theatre wants to welcome you!

The post February 10, 2025: What’s Happening in Your Neighbourhood? appeared first on KW Habilitation.


Code Like a Girl

Five Ways for New Coders to Turn Mistakes into Mastery

Avoiding common mistakes will help you master coding quickly.

Continue reading on Code Like A Girl »


Code Like a Girl

A Week of Isolation Made Me a Better Coder — and a Much Worse Communicator

How programming alone rewired my brain in ways I didn’t expect

Continue reading on Code Like A Girl »


KW Predatory Volley Ball

Congratulations 14U Altius Strong. 15U McGregor Cup Trillium A Gold

Read full story for latest details.

Tag(s): Home

KW Music Productions

Announcing Our 2025 Fall Show!

♦ The twister is coming and it’s bringing you Oz.

We’re excited to share that The Wizard of Oz will land in the Humanities Theatre this November. Familiar faces, unforgettable moments and a journey like no other in musical theatre history.

Creative call and audition information will be coming soon. Keep an eye on our socials and check back on our website to stay informed about all things Oz!

We can’t wait to see you this November.

Performances will take place at the Humanities Theatre at UWaterloo from November 27-30.

The Wizard of Oz 
by L. Frank Baum
With music and lyrics by Harold Arlen and E.Y. Harburg 
Background music by Herbert Stothart
Dance and vocal arrangements by Peter Howard
Orchestration by Larry Wilcox
Adapted by John Kane for The Royal Shakespeare Company

Based upon the classic motion picture owned by Turner Entertainment Co. and distributed in all media by Warner Bros.

 

 

 

The post Announcing Our 2025 Fall Show! appeared first on K-W Musical Productions.


KW Predatory Volley Ball

Congratulations 15U Elite. McGregor Cup Select B Gold

Read full story for latest details.

Tag(s): Home

KW Predatory Volley Ball

Congratulations 14U Rampage. 15U McGregor Cup Trillium White C Bronze

Read full story for latest details.

Tag(s): Home

The Backing Bookworm

The Fall Risk



Short and oh-so-sweet!
The Fall Risk is a short story (a mere 82 pages long) that centres around Seth and his neighbour Charlotte, who are stuck on the second storey of their apartment building for three days when the outer stairs are demolished earlier than planned.
This is a super cute read but you're gonna want to put your logical cap aside. I still can't visualize how the stairs to this apartment fiasco looked, but I did enjoy this brief RomCom that has a couple of popular tropes (forced proximity, Insta-Love - which was sweet and works!), funny banter and secondary characters in Gabe and Izzy who almost steal the show.
Jimenez provides a trigger warning up front (for those who prefer to read them - I didn't) and I appreciate how she addresses, if briefly, the scary issue that impacts Charlotte's life. 
I finished this short book with a smile on my face and a desire to get a fish just so I can name him Swim Shady.
Disclaimer: I received this digital book free of charge as part of the Amazon First Reads program in exchange for my honest review.

My Rating: 4 starsAuthor: Abby JimenezGenre: Romance, Short StoryType and Source: ebook from Amazon First Reads programPublisher: Amazon Original StoriesFirst Published: March 1, 2025Read: Feb 6, 2025

Book Description from GoodReads: Two good neighbors make the best of a bad Valentine’s Day in a funny and improbably romantic short story by the #1 New York Times bestselling author of Just for the Summer.
It’s Valentine’s Day weekend, and Charlotte and Seth are not looking for romance. Armed with emotional-support bear spray, Charlotte is in self-imposed isolation and on guard from men. Having a stalker can do that to a person’s nerves. Just across the hall and giving off woodsy vibes is Seth, a recently divorced arborist. As in today recently. Heights, he’s fine with. Trust? Not so much. But when disaster traps them one flight up and no way down, an outrageously precarious predicament forces a tree-loving guy and a rattled girl next door to embrace their captivity. Soon their defenses are breaking away. Considering how close they both are to the edge, Charlotte and Seth could be in danger of falling—in love.