WRDashboard

Fork Me on GitLab

Articles

Code Like a Girl

Not All Great Work is Loud

Rethinking Visibility in Tech Teams

Continue reading on Code Like A Girl »


James Davis Nicoll

Justified In The End / Ryuu no Gakkou wa Yama no Ue Kui Ryouko Works By Ryōko Kui

Ryoko Kui’s 2011 Ryuu no Gakkou wa Yama no Ue Kui Ryouko Works (The School of Dragons is on the Mountain — Ryoko Kui Works) is a collection of ten short spec-fic stories, some of which are connected by the question “what follows a successful quest?”


Cordial Catholic, K Albert Little

The Catholic Response in a Time of Crisis (w/ Fr. Donald Haggerty)


Code Like a Girl

From Fangirl to Final Gatekeeper: My Apple Software Journey

The advancement of technology and the rise of the modern computing industry have coincided with my lifetime. I have experienced life with and without cell phones and computers; the transition from one phase to the next helped me realize the importance of technological improvements and how they have redefined the way we live.

With the recent breakthroughs in Artificial Intelligence and Machine Learning, we are transitioning into yet another big phase of the field. The prospect of being involved in such a rapidly advancing field excites me beyond compare.

Project Idea: Health monitoring-based iOS app

While pursuing my undergraduate studies in Computer Science in India, I made a sincere effort to go the extra mile: my group’s final-year project was an iOS-based graphical data analytics system for monitoring an individual’s health. At the time, we faced several challenges, since iOS development, data analytics, and cloud computing were not offered in our curriculum, and my group was the only one working on such a project.

Our concept was a health monitoring platform built around a wearable device that could detect your vitals and upload them via the cloud to a database accessible by doctors across the globe. We also attempted to present analyses of the medical data as histograms, line charts, and pie charts. We taught ourselves the tools we needed, such as VMware and Xcode, as well as the programming language Swift, which had just been introduced when we conceived the project.

Furthermore, our review paper, Data Analytics: Improving Health and Economy, was published in the International Journal of Innovation in Engineering and Technology (IJIET). These achievements, combined with my ability to produce quality results with minimal training, made me stand out during placements and helped me land a role as an iOS Developer.

As an iOS developer, I worked on many applications involving live tracking, chat modules, database management via Apple’s Core Data, synchronization with wearable devices, and many more basic features. As lead developer on a few projects, I also gained experience understanding project requirements, explaining them to the team, and delivering on time.

I enjoyed building application prototypes while incorporating fun UI elements, gestures, and animations. It gave me a completely different outlook on the apps I was using and made me even more inquisitive and thorough.

Image from LetsNurture.com’s blog archives

I was also a developer for Amazon Alexa skills at the organization, learning yet another technology stack while still getting used to the iOS app development stack.

For Alexa, I worked with the Alexa Skills Kit, AWS, and Node.js. The whole concept was still quite novel in India in 2018, and only a few people had a good understanding of the project lifecycle of Alexa skills, so I handled client communication along with developing and publishing the skills. The skills ranged from city tour guides to audio streaming to IoT assistants.

These experiences further stimulated my inclination towards Artificial Intelligence. Specifically, I looked forward to studying Artificial Intelligence, Machine Learning, Pattern Recognition, and similar courses at university.
At that stage, I believed that a Master’s degree would catalyze my growth and put me on the right path to develop technology-based solutions to help tackle depression and other health problems and to serve education. Such a program would not only motivate me to perform but would also propel me closer to my goals and ambitions.

Shortly after starting the Master's program in the US, I began preparing for SDE internship opportunities while leveraging the strong theoretical foundation I gained in my Master’s program — think Data Structures, Algorithms, and Database Design.

Integrating my existing skills with a deeper grasp of core CS principles and a thoughtful approach to applications and interviews significantly enhanced my understanding of the software development landscape and helped me secure an internship at Apple, among a few other companies.

Ever since I was captivated by the potential of technology to build a better world, Apple products have held a special allure. That initial admiration fueled my journey into the world of software development. As an Apple admirer, stepping into Apple as an intern was a dream come true.

Even though I had been an iOS app developer, working with the iOS Accessibility team as a Software Engineer was a very different experience. I had to use many internal tools, get familiar with a legacy codebase, and work in a new programming language, Objective-C. I am grateful to have worked with this team on Switch Control, and it gives me immense contentment and happiness that I was able to work on something that contributes to making someone’s life better.

I would like to take this opportunity to emphasize how essential inclusivity and accessibility support are. Accessibility is also about personalization.

Unfortunately, the tech industry took a big hit soon after I completed my internship. I could not return to Apple at the time; however, the experience was unparalleled and I cherish the memories to this day. Even though it was difficult to graduate without an offer letter, this experience helped me show my resilience, diligence, and determination.

This in turn helped me to get an interview at Barclays, where I was hired as a Full-stack developer. This role was a completely different one, where I learned another set of tools and programming languages while understanding the structure and architecture of web apps. As I was working with a team responsible for developing the customer-facing website, it was very important to stay consistent and meticulous with whatever tasks I was doing.

As interesting and valuable as the Barclays experience was, full-stack development was not something I resonated with as much as iOS and Apple in general. After another round of applications and interviews, I am now back at Apple with the Release Validation Engineering team, where I help ensure that all major and minor updates ship without issues across all platforms.

Drawing on my relevant experience as both a developer and an engineer, I earnestly look forward to continued growth and many more endeavors in Software Engineering.

From Fangirl to Final Gatekeeper: My Apple Software Journey was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

From iOS Dev to Apple Intern: My Journey Through the Interview Maze

Hey everyone! I wanted to share my experience navigating the exciting world of SDE intern interviews back in Fall 2021.

After completing my Bachelor’s in Engineering with a major in Computer Science and subsequently working for a couple of years as an iOS app developer, I decided to broaden my horizons within the CS field. This led me to pursue an MS in Computer Science in the US, and shortly after starting the program, I began preparing for SDE internship opportunities while leveraging the strong theoretical foundation I gained in my Master’s program — think Data Structures, Algorithms, and Database Design.

Now, with that academic toolkit ready, the job hunt began. It felt like a multi-pronged attack, and here’s a peek into my strategy.

Casting a Wide Net: My Application Approach
I wasn’t shy about putting myself out there. My application strategy looked something like this:

  • Tailored is Key: Every application felt personal. I meticulously tweaked my resume to highlight the skills and keywords relevant to each specific role. No generic submissions here!
  • The Usual Suspects (with a twist): I diligently scoured mainstream job boards, university forums, and company career pages. But I didn’t stop there.
  • Direct Company Connection: I made it a point to visit the websites of companies I genuinely admired, seeking out suitable intern openings.
  • Networking with Purpose: LinkedIn became my friend. I actively engaged with posts from recruiters and hiring managers, offering thoughtful comments and making connections. The same goes for any networking site active at the time: be proactive and engage wherever possible.
  • Strategic Outreach: I wasn’t afraid to search for individuals with relevant keywords in their profiles (think “Swift,” “iOS Engineer” even for SDE roles, and of course, “Recruiter”). I crafted concise, informative messages introducing my background and expressing my interest.
  • Conference Engagement: Attending industry conferences like the Grace Hopper Celebration (GHC) provided opportunities to network with recruiters and company representatives directly. These events often have career fairs and dedicated sessions for students and early-career professionals.
  • Persistence Pays Off: The job hunt requires resilience. I made sure to follow up consistently and keep the lines of communication open.

Sharpening the Sword: My Technical Skill Development
My MS coursework laid a solid foundation, but I knew I needed to get back into the competitive coding groove. Here’s how I tackled it:

  • Revisiting the Classics: I dusted off my notes and solutions from my Data Structures and Algorithms class. Those projects and problems were gold!
  • Building a Base: I supplemented this with structured coding prep courses on platforms like InterviewCake and GeeksForGeeks. These helped refresh fundamental concepts.
  • Leetcode Immersion: Leetcode became my training ground. To make it less overwhelming, I started by focusing on my strongest language because I knew that showcasing my problem-solving approach and logical reasoning was paramount. The interviewer often cares more about how you think than just getting the right answer in a specific language. I remember tackling basic problems like “Two Sum.” Understanding how a dictionary could optimize the solution from a brute-force approach to O(n) was a key learning moment. The focus was always on identifying the brute-force method first, then optimizing while analyzing the time and space complexity.
  • Curated Practice: As I progressed, I started creating targeted lists of problems based on data structures (arrays, linked lists, trees, etc.), design patterns, common problem-solving techniques, and frequently asked interview questions.
  • The Interview Prep Sprint: Once interview calls started rolling in, I would either revisit existing curated lists or create new ones tailored to the specific role. This intense, focused practice helped build confidence and familiarity with the interview question format. Solving a bunch of problems in those lists right before an interview really helped me feel prepared.
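The dictionary trick mentioned for “Two Sum” can be sketched as follows. This is a minimal illustration (shown in Python for brevity, though the idea is language-agnostic) of the standard problem: return the indices of the two numbers that add up to the target.

```python
def two_sum(nums, target):
    """Return indices of the two numbers in nums that add up to target.

    Brute force checks every pair, which is O(n^2). Storing each value's
    index in a dictionary lets us look up the needed complement in O(1),
    so a single pass suffices: O(n) time, O(n) extra space.
    """
    seen = {}  # value -> index where it was seen
    for i, x in enumerate(nums):
        complement = target - x
        if complement in seen:
            return [seen[complement], i]
        seen[x] = i
    return []  # no pair found


print(two_sum([2, 7, 11, 15], 9))  # -> [0, 1]
```

The pattern of “identify the brute force first, then ask what repeated work a hash table could cache” generalizes to many of the array and string problems in those curated lists.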

Beyond the Code: The Human Element
Technical prowess is crucial, but I realized that landing an internship is also about fit.

  • Personality Matters: Interviews aren’t just about code; they’re about assessing how you’d integrate into a team. Authenticity is key.
  • Communication is King: All that hard work solving problems needs to be effectively communicated. I consciously worked on articulating my thought process clearly and concisely. Those introductions? Keep them brief but impactful, highlighting your relevant experience and enthusiasm.
  • Honesty and Self-Awareness: It’s okay if you’re not the perfect fit for every role. Understanding that different roles have different demands is important. Be honest about your strengths and areas for growth.
  • Embrace the Journey: Not every interview will be a success. Learn from each experience and keep pushing forward.

The journey from iOS development to securing and navigating an SDE internship in Fall 2021 was challenging yet incredibly rewarding. By combining my previous experience with a renewed focus on core computer science principles and a strategic approach to applications and interview preparation, I gained invaluable experience and a clearer understanding of the software development landscape. Hopefully, sharing my experience offers some insights for those embarking on a similar path!

From iOS Dev to Apple Intern: My Journey Through the Interview Maze was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elmira Advocate

"THE CLEANUP IS A SHAM" - IS THE GLASS HALF FULL OR HALF EMPTY?

 

Way back in the mid-1990s I put a large sign on the roof of my car that read "The Cleanup is a Sham". I was mostly referring to the ongoing (and still ongoing) DNAPL scam and sham. Back then, I honestly had no idea of the perfidy, ruthlessness, and lengths to which our authorities were willing to go to protect their own as**s versus those of their constituents.

Thirty years later I understand that my criticisms were in fact understated and muted, but they were based upon the facts I had at the time. Here is the perspective of where we are today versus the 1990s. The Canagagigue Creek has been studied, sampled, and monitored to the point that we know that PCBs, mercury, DDT, and dioxins/furans are present in creek soils (creekbanks), creek sediments (the creek bottom), and fish tissues, all above health standards. Other than mercury, the chemicals involved, including PAHs (polycyclic aromatic hydrocarbons), DDT, dioxins/furans, and PCBs, are all Persistent Organic Pollutants (POPs). Nothing has been remediated or removed from the approximately five miles of downstream creek from Uniroyal/Lanxess all the way to the Grand River just south of West Montrose.

Small areas on the former Uniroyal site have been excavated, such as RPE 4 & 5 on the east side of the Creek. The only real improvement, in my opinion, is in the Elmira aquifers. As miserable and relatively cheap as pump-and-treat is, it does appear to have substantially lowered the concentrations of NDMA, chlorobenzene, and presumably all the other solvents in the aquifers. The issue for me and other unbiased but informed citizens is the bulk of liquid contaminants just upgradient on the Lanxess site. If the company sells or cuts and runs and the on-site pumping stops, then we are in BIG trouble. Already they are cutting back on on-site pumping, and the majority of twits on TRAC are saying nothing. Either they don't know or they don't care, or both.

"Trust the Ministry of the Environment (MECP) to step in" is, in that case, the mantra of our politicians and other self-serving crooks and fellow travellers. Absolutely not. The MECP got us into this mess and they have never done their duties appropriately here in Elmira, Ontario. Lying to and misinforming the public is NOT supposed to be their mandate, but as practice makes perfect, they have indeed approached perfection.


GitHub: Brent Lintner

brentlintner pushed to master in brentlintner/vim-settings

May 20, 2025 18:10 · 2 commits to master
  • 250db82
    Package updates
  • e1db868
    Re-enable providers why not?

Cindy Cody Team

Build Wealth With A Smart Real Estate Investing Strategy

Money is a vital topic for many families. It’s not just about meeting today’s needs, but about building a secure future for your children and grandchildren. Real estate investment is one of the most powerful tools for creating long-term financial stability and setting an example of smart decision-making for the next generation. By learning how to grow wealth through property, you can help lay the foundation for lasting prosperity and generational security.

If you’re considering building wealth through real estate but feel uncertain about where to start, you’re not alone. We are passionate about guiding individuals through their investment journeys. As experienced investors ourselves, we love to share our knowledge (bringing both a personal and a professional perspective) so others can succeed.

Why real estate?

When people think about wealth, they often focus on their immediate financial goals—paying off debt, saving for retirement, or buying a home. While these are all important, building generational wealth means taking a longer view. It’s about creating financial stability that can be passed down to your children and grandchildren, giving them a head start and the freedom to make empowered choices in life. One of the most effective ways to do this is through real estate.

Unlike some investments that can be volatile or short-term, real estate has the potential to appreciate over time while also generating income through rental properties.

Here’s a key tip:

Focus on cash flow first. Look for properties that generate positive monthly income after all expenses. This provides stability and options as you grow your portfolio.

A well-chosen property can provide consistent cash flow, build equity, and increase in value, all while offering tax advantages. When approached with a long-term mindset, real estate becomes more than just a place to live; it becomes a vehicle for financial legacy.

One of the key benefits of using real estate to build generational wealth is its tangibility. A property can be seen, touched, improved, and leveraged in multiple ways. It can be refinanced to fund other investments, passed down through an estate plan, or serve as a long-term income stream for future generations. Teaching your children how real estate works—how to evaluate properties, understand the market, and manage finances—can instill valuable financial literacy that lasts a lifetime.

It’s also important to note that real estate doesn’t require you to be a millionaire to get started. Many families begin with a single home, living in part of it while renting out the other unit, or investing in a small duplex or condo. Over time, this can lead to acquiring more properties and creating a diversified real estate portfolio. Even modest investments, when managed wisely, can grow significantly and provide a strong foundation for the next generation.

Building wealth takes a smart strategy.

Of course, building generational wealth isn’t just about accumulating assets; it’s also about planning. Setting up the right legal and financial structures, such as wills, trusts, and insurance policies, ensures that your wealth is protected and transferred efficiently. Working with professionals, including a real estate advisor, lawyer, and financial planner, can help you make the most of your investments and protect your family’s future.

With the knowledge we’ve gained from years of being investors ourselves, we’re able to help guide others through the process. Think of us as your trusted advisors. With your goals in mind, we can help you make informed decisions about your investment.

At its core, generational wealth through real estate is about intention. It’s about making informed decisions today that will benefit your family tomorrow. Whether you’re purchasing your first property or expanding your investment portfolio, every step you take contributes to a legacy of opportunity, security, and financial well-being for those who come after you.

Have questions about real estate investing? Give us a call today!

Want to learn more?

Check out this blog for more tips for building generational wealth with real estate.


Brickhouse Guitars

Brickhouse Guitars at Boucher Guitars 20th Anniversary Celebration Part 3 - Logs to Tops w/ Robin B


Cindy Cody Team

How to Keep Your Home Cool and Energy Efficient During Ontario Heatwaves

Ontario summers can bring more than just sunny skies. They often deliver intense heatwaves that can leave your home feeling like an oven and your energy bills soaring. In Kitchener-Waterloo, where older homes and newer builds alike face the challenge of rising temperatures, keeping your home cool without sacrificing energy efficiency is key.

Whether you’re staying put or preparing your home for the market, here are practical tips to beat the heat while staying energy smart.

Keep reading for activities and fun ways to stay cool during a heatwave.

1. Make Use of Window Coverings

Keep blinds, curtains, or shades closed during the hottest parts of the day, especially for south- and west-facing windows. Consider blackout curtains or thermal window coverings to block out sunlight and reduce indoor temperatures by several degrees.

Pro tip: Reflective window film or solar screens can help deflect UV rays and reduce heat gain without darkening your home.

2. Maximize Natural Ventilation (When Possible)

On cooler evenings or early mornings, open windows on opposite sides of your home to create cross-breezes. This can help flush out trapped heat and bring in fresh air without turning on the AC.

Note: Be sure to close windows and blinds once the outdoor temperature starts to rise again.

3. Seal and Insulate

Gaps around windows, doors, and attic hatches allow hot air in and cool air out. Sealing these leaks and adding insulation, particularly in the attic, can significantly reduce your need for air conditioning.

This upgrade not only helps during heatwaves but also improves your home’s year-round efficiency, boosting its value when it’s time to sell.

4. Use Fans Wisely

Ceiling fans should spin counterclockwise in the summer to push air downward, creating a wind-chill effect. Portable fans can also be placed near windows or hallways to improve air circulation.

Pro tip: Use an exhaust fan in the bathroom or kitchen to vent hot air out after cooking or showering.

5. Avoid Heat-Generating Activities Indoors

Limit the use of ovens, stoves, dishwashers, and dryers during the hottest hours of the day. Opt for BBQs, air fryer meals, or cold dishes like salads and smoothies.

Energy-efficient tip: Run large appliances during off-peak hours (after 7 p.m.) to save on energy bills.

6. Consider a Smart Thermostat

Smart thermostats allow you to schedule temperature changes based on your daily routine and the weather forecast. They can reduce unnecessary cooling when no one’s home and help maintain comfort during peak heat hours.

Rebates may be available through Ontario’s energy efficiency programs. It’s worth checking out if you’re looking to upgrade!

7. Landscape for Shade

Strategically planting trees, shrubs, or installing shade structures around windows can help block direct sunlight and lower your home’s cooling load over time. In Kitchener-Waterloo’s many family-friendly neighbourhoods, thoughtful landscaping also boosts curb appeal.

8. Get an AC Tune-Up or Upgrade

If your air conditioner is more than 10 years old, it may not be operating efficiently. A seasonal tune-up or an upgrade to an ENERGY STAR® rated unit can help reduce your cooling costs and extend the system’s lifespan.

Selling soon? An energy-efficient cooling system is a great feature to highlight in your home listing.

Stay Cool, K-W!

Kitchener-Waterloo is no stranger to hot, humid summers, but with the right strategies, your home can stay comfortable and energy-efficient all season long. Whether you’re a long-time homeowner or preparing your home for sale, these tips can help you beat the heat without breaking the bank.

Looking for more advice on how to boost your home’s value or prepare it for today’s market? Our real estate team is always here to help.

10 Fun Ways to Stay Cool During a Heatwave

1. Water Balloon Battles
Cool off and get active with a good old-fashioned water balloon fight in the backyard. Kids and adults alike can join in the fun. Just be sure to clean up the pieces after!

2. Visit a Local Splash Pad or Pool
Kitchener-Waterloo has plenty of free splash pads and public pools. Try McLennan Park, Victoria Park, or Waterloo Park for a refreshing afternoon outing.

3. Make DIY Popsicles
Blend fruit juice, yogurt, or smoothie mix and freeze in molds for a cold treat. It’s fun, easy, and healthier than store-bought options.

4. Turn On the Sprinkler
A simple sprinkler in the yard can become the centre of hours of fun. Bonus: your lawn gets watered too!

5. Have an Indoor Movie Marathon
Keep the curtains drawn, grab some icy drinks, and settle in for a cool movie day. Choose summer-themed flicks or family favourites.

6. Freeze Your Bedding
It may sound funny, but try putting your pillowcases or sheets in the freezer for a few minutes before bedtime. It’s surprisingly soothing and helps you fall asleep faster on hot nights.

7. Try a Cold Foot Bath
Fill a tub with cold water and soak your feet while reading or watching TV. It’s a quick way to cool your whole body.

8. Enjoy Frozen Treats from Local Shops
Take a break from the heat and support local businesses like Four All Ice Cream for something frosty and delicious.

9. Host a “Chill” Game Night
Play board games or cards indoors with the AC or fans on, and serve icy drinks and snacks. It’s a social way to stay out of the sun.

10. Create a DIY Indoor Fort with Fans
Build a cozy blanket fort with the kids and add a fan or mini AC unit for an air-conditioned hideout. It’s a cool twist on a childhood classic.

Stay cool!


Grand River Rocks Climbing Gym

Bouldering 101 – Waterloo

The post Bouldering 101 – Waterloo appeared first on Grand River Rocks Climbing Gym.


Children and Youth Planning Table of Waterloo Region

Travel blog: CYPT at the Child Rights Academic Network

In April, the Children and Youth Planning Table of Waterloo Region joined the Landon Pearson Resource Centre for the Study of Childhood and Children’s Rights (LPC) at Carleton University, Ottawa, for its 2025 Child Rights Academic Network (CRAN) meeting.

 

Through annual CRAN meetings, the LPC brings together a range of professionals from academia and community organizations who are dedicated to children’s rights. Together, they collaborate to develop and implement actions that address issues important to children and young people. The theme for this year’s meeting was Building our Communities of Care.

 

Goranka Vukelich (former Co-Chair of the CYPT), Jahmeeks Beckford (Play Lead), and I got a chance to share our experiences using LPC’s Shaking the Movers (STM) child-engagement framework for the first time.

 

CYPT partnered with both the LPC and the International and Canadian Child Rights Partnership (ICCRP) to pilot the Shaking the Movers Waterloo Region: Child Voice Project on April 5, 2025. Together with other colleagues across Canada who had also organized STM workshops, we reflected on the valuable insights children shared about a caring community. We also considered the challenges raised, suggested solutions, and the calls to action by young people. The climax of the CRAN experience for me was the collaborative session we had to brainstorm and finalize a communiqué to be shared with G7 leaders about the concerns of children and youth. It’s inspiring to see an immediate opportunity for the voices of young people from different corners of Canada to reach the ears of global leaders. And the voices of Waterloo Region children get to be a part of it!

 

Apart from discussing our Shaking the Movers Report and sharing our youth engagement work in Waterloo Region, we also got a chance to connect with great people doing similar work across Canada and internationally.

 

Participants at the CRAN meeting appreciated Waterloo Region’s pilot project as it brought perspectives from children aged 6-8, which was the youngest group out of all the 2025 STM workshops across the country. The activities our youth facilitators created to engage the children were also a delight. After sharing our presentation of the STM workshop on day one, we were invited to the STM 2026 mapping meeting to share our insights on the direction of the next STM. 

 

The Ottawa CRAN meeting was certainly an inspiration and validation of our work to keep amplifying and advocating for the voices of children and youth in our community. 

 

I am absolutely thankful for all the children, youth, families, and partner organizations who work together to make CYPT’s work effective in our community.

 

Solami Okunlola
Child Engagement in Systems Lead
Children and Youth Planning Table of Waterloo Region

The post Travel blog: CYPT at the Child Rights Academic Network appeared first on Children and Youth Planning Table.


Aquanty

Staff Research Highlight - A dynamic meshing scheme for integrated hydrologic modeling to represent evolving landscapes

Hwang, H.-T., Park, Y.-J., Berg, S. J., Jones, J. P., Miller, K. L., & Sudicky, E. A. (2025). A dynamic meshing scheme for integrated hydrologic modeling to represent evolving landscapes. In Science of The Total Environment (Vol. 976, p. 179129). Elsevier BV. doi.org/10.1016/j.scitotenv.2025.179129

“This scheme enhances the accuracy and reliability of subsurface flow and transport simulations in HydroGeoSphere (Aquanty Inc., 2023) by incorporating temporal variations in topography caused by anthropogenic activities. Importantly, the innovative aspects of this newly suggested scheme extend its applicability beyond HydroGeoSphere, positioning it as a valuable solution for addressing challenges.”
— Hwang, H.-T., et al., 2025

CLICK HERE TO READ THE ARTICLE.

GRAPHICAL ABSTRACT. Illustrative example of the time-varying topographic model with the dynamic meshing scheme: a) excavation and b) material placement processes.

We’re pleased to highlight this publication (co-authored by Aquanty’s Hyoun-Tae Hwang, Steve Berg, Killian L. Miller, and Edward A. Sudicky) which introduces a novel dynamic meshing scheme for integrated hydrologic modelling to better represent evolving landscapes. The approach addresses a major challenge in modelling human-altered environments, particularly in regions undergoing rapid changes such as open-pit mining sites, land reclamation zones, or urban developments. Traditional hydrologic models often rely on static mesh geometries, limiting their ability to capture changes in topography and subsurface structure over time. This research proposes a more flexible, adaptive framework capable of simulating surface and subsurface hydrologic responses to complex engineering activities.

Figure 4. Locations of aggregate mining sites in the Grand River Watershed with the Lower Nith River subwatershed (inset).

In this study, the dynamic meshing scheme was implemented within HydroGeoSphere (HGS), a fully integrated surface-subsurface hydrologic modelling platform developed by Aquanty. HGS’s governing equations solve for variably saturated subsurface flow and surface water routing using the control volume finite element method, making it well-suited for simulating coupled processes across changing terrain. The new meshing strategy allows for temporal updates to the geometry of the model, such as excavation and material placement, by adjusting nodal elevations and element configurations dynamically throughout a simulation. This capability enables more realistic representations of how large-scale anthropogenic activities alter hydrological connectivity and storage within the landscape.

To validate the method, the researchers performed benchmark tests under idealized surface and subsurface flow conditions, as well as a proof-of-concept application simulating aggregate mining operations in the Lower Nith River subwatershed of Ontario’s Grand River watershed. In these scenarios, excavation and backfilling operations were modelled over a multi-year period, with HGS capturing resulting changes in groundwater levels and surface water depths. The simulations revealed that while surface water systems tend to recover quickly after restoration activities, groundwater systems can exhibit more persistent disturbances due to altered subsurface flow paths and material properties.

Figure 7. Spatiotemporal evolution of surface-subsurface hydrologic conditions during the mining operation of Case 2: a), c) and e) for the surface water system and b), d), and f) for the groundwater system along cross-section A-A’.

This work underscores the importance of using adaptive modelling techniques when assessing environmental impacts of dynamic land use and engineering interventions. By integrating the dynamic meshing scheme with HGS, the research presents a powerful tool for evaluating hydrological responses to evolving terrain and supports more robust planning for sustainable resource and infrastructure management. This advancement expands the potential applications of HGS to include more complex, real-time engineering scenarios, providing critical insights for both the hydrological modelling community and environmental decision-makers.

Abstract:

The influence of human activities on water resources has gained significant attention from water resource regulatory authorities, stakeholders, and the public. Anthropogenic activities, such as alterations in land use, agricultural practices, and mining operations, have a profound impact on the sustainability and quality of both surface water and groundwater systems. Evaluating the influence of a continually evolving engineered environment on surface water and groundwater systems demands the utilization of adaptive landscape models that can consider changing surface and subsurface topography, geometry, and material properties. Typically, fully integrated hydrologic models have been employed to analyze alterations in water availability and quality resulting from variations in climatic conditions or water extraction. In such scenarios, the structural framework of the model remains constant, with adjustments typically made to boundary conditions or material parameterizations during simulations. However, in cases of substantial landscape transformations, such as urban development, industrial expansion, and open-pit mining, accurately representing these changes in models becomes challenging due to the limitations of fixed model geometry in capturing dynamic shifts in surface water and groundwater systems. This study presents a dynamic meshing scheme integrated into the surface-subsurface model, HydroGeoSphere. The accuracy of the evolving-landscape model was verified by comparing it against groundwater seepage patterns in static hillslope conditions, demonstrating strong agreement with previous studies. Furthermore, we present a proof-of-concept application of the dynamic meshing scheme in synthetic open-pit mining sites located in the Lower Nith River subwatershed within the Grand River Watershed, Canada, effectively capturing time-dependent engineering configurations in an integrated surface-subsurface model.

“Although HydroGeoSphere currently has the capability to update spatiotemporal changes in land use and cover types, as well as evapotranspiration conditions—including evapotranspiration zones, land use/cover zones, Manning roughness coefficients, and leaf area index—parameters such as evaporation, transpiration, and vegetation influence were not primary considerations in this study, which focused on demonstrating the implementation of the dynamic meshing scheme.”
— Hwang, H.-T., et al., 2025



Code Like a Girl

Decision Trees: A Powerful Tool in Machine Learning

They handle both classification and regression tasks

Continue reading on Code Like A Girl »


James Davis Nicoll

The Challenge Of Our Day / That Leviathan, Whom Thou Hast Made By Eric James Stone

Eric James Stone’s 2010 That Leviathan, Whom Thou Hast Made is a Nebula-Award-winning novelette.

For most of humanity, Sol Central Station figures as the waystation to the stars. For Harry Malan, it is where he presides over a small Mormon ward. Malan’s congregation may be small, but it is noteworthy because most of the members are aliens, plasma beings that humans call swales.

To Malan’s distress, not only is station-mate Dr. Juanita Merced a Gentile (so she can be no solution to Malan’s celibate solitude), she is a godless Gentile who opposes Malan’s good works.


Code Like a Girl

Convert Figma designs to code with Bolt AI

Explore how to use your own UX design and develop it into an app.

Continue reading on Code Like A Girl »


Code Like a Girl

Break Free from the Motivation Trap Today

Feeling Unmotivated? Try Five Smarter Ways to Reach Your Goals

Continue reading on Code Like A Girl »


Code Like a Girl

Will AI Replace You? How to Avoid the 4 Traps to 10x Your Productivity

Discover how to turn AI from a chaotic assistant into a streamlined team — no code, no burnout.

Continue reading on Code Like A Girl »


Code Like a Girl

The 8 Mistakes That Silently Slowed Me Down at Every New Company

Lessons I’ve Learned From Fumbling, Failing, and Figuring It Out

Continue reading on Code Like A Girl »


Code Like a Girl

How I Ran an LLM Locally in Under 1 Minute

Yes Really!!

Continue reading on Code Like A Girl »

Kitchener Panthers

Vargas fans 11 as Panthers get comeback win

KITCHENER - Despite being down 6-1 after two innings, the Kitchener Panthers didn't quit.

Andy Vargas struck out 11 batters in five innings of work, and the Panthers got five home runs to take a wild 11-10 win over Chatham-Kent at Jack Couch Park Sunday.

"It just shows you his resilience, about not quitting and giving up just because he got off to a rough start," Panthers field manager Pete Kiefer said of his starter.

"He battled back, and just gave us what we needed."

Yunior Ibarra, Arthur Kowara, AJ Karosas and Nick Parsons all went yard, before Yosvani Penalver hit a three-run home run in the eighth to give Kitchener the lead for good.

Every Panther in the starting lineup got a hit.

"(The offense), they could also have quit because it was tough," Kiefer said, alluding to dealing with the switch pitcher Mizuki Akatsuka.

The import went seven innings, giving up 11 hits and eight runs (six earned) for Chatham.

Danny Garcia got the win for Kitchener, while Sam McKinlay was saddled with the loss.

Yankiel Mauris collected his second save of the year, limiting the Barnstormers to two runs in the ninth.

Kitchener improves to 2-1, and hosts Brantford Thursday night at 7:30 p.m.

GET YOUR TICKETS NOW and #PackTheJack for some exciting Kitchener Panthers baseball!

GAMESHEET



KW Predatory Volley Ball

Tryout Window Townhall May 27. Register

Read full story for latest details.


James Bow

Can Russell Stick the Landing? Doctor Who Disney Series Two (so far) reviewed.

The image above is courtesy of James Pardon at BBC Studios and Bad Wolf.

Whatever happens over the next two weeks, Russell T. Davies has delivered one of the best runs of Doctor Who episodes that I can recall in years. The Robot Revolution was a wild and fun opener that introduced the new companion and geared us up for the series arc. Lux featured a great return of the Pantheon with the Chaotic God of Light, Lux. The Well was the terrifying Midnight sequel we never realized we needed. Lucky Day was a brilliantly infuriating story ripped from social media, and The Story and the Engine was a compelling tale about the power of story that showed the series could still bring something fresh and new to the table even after sixty-two years.

And with the Interstellar Song Contest, we have a tale that again provides a fantastic and grim allegory of our treatment of people in the Middle East (I don't think it's a coincidence that they riffed on the burning of poppies), before kicking the season arc into high gear with a big revelation and the promise of mayhem to come.

But will it make sense?

The truth is, as fantastic a writer and showrunner as Russell is, he has a tendency to promise the world when it comes to his season finales. Sometimes he lives up to what he promises, but many times he can't. Consider Doctor Who's Disney Series One, which I enjoyed. The season arc is launched with promise around the origins of Ruby Sunday in The Church on Ruby Road. A big confrontation is promised through cryptic remarks by Maestro in The Devil's Chord and the mysterious Susan Twist cameos, before it snaps into play in The Legend of Ruby Sunday when the Doctor decides NOW is the time to explore Ruby's origins, and we muddle around for the big cliffhanger reveal of Sutekh. Ruby's origins prove to be surprisingly mundane (and I can understand and appreciate Russell's reasoning here), but it makes Sutekh's eventual downfall all the more confusing and weak.

This season, Russell T. Davies has done a better job building up the series plot by simplifying it. Instead of, "Where did Ruby Come From?" we have "We need to bring Belinda Home... Except there's no home for Belinda to go back to." There's more focus and less confusion when Mrs. Flood takes on the Susan Twist role throughout the season, and it feels like the resolution will have a pay-off that builds naturally from what comes before, and will be more satisfying as a result.

That is, assuming that Russell can actually stick the landing and provide a logical resolution. And I'm a little leery here due to past experience, and the fact that, at the end of The Interstellar Song Contest, he has chosen the least interesting big revelation that he could have picked.

Hear me out.

Oh, and by the way, spoilers follow from here on out, so if you haven't seen these episodes up to The Interstellar Song Contest, stop reading and start watching, and return here once you catch up. You really do deserve to see these episodes as unspoiled as possible. Russell and company gave us some excellent moments and excellent pay-offs, and they're all the better encountered in the moment. So go away and don't come back until you're prepared.

Back already? Let us continue.

Okay, so: Mrs. Flood is the Rani, and we should have guessed because Floods are caused by a lot of Rain and Rain is an anagram of Rani, haha, Russell you sneaky bastard. The events of The Interstellar Song Contest cause Anita Dobson's Mrs. Flood to bi-generate into herself and Archie Panjabi (it's about time that the character was played by a South Asian actress), and the new Rani leads the way offscreen promising to "bring terror to the Doctor". Meanwhile, the Doctor learns that the Earth disintegrated within seconds on May 24, 2025 and, when trying to bring Belinda back there, the TARDIS explodes.

In saying that bringing back the Rani is the least interesting big revelation Russell T. Davies could have picked, I have no objection to the Rani being involved in this season's story arc and the two-part finale. And while she has only been on screen for a few seconds, Archie Panjabi looks ready to blow everyone away. However, if her motivation is to bring terror to the Doctor, my big question is: why?

Why here? Why now? Why him?

The Rani is not the Master, who has such a history with the Doctor throughout the whole show that I'd frankly be surprised (and a little disappointed) if he doesn't show up in the upcoming two-parter. In her previous appearances, she has been portrayed as an amoral researcher looking into the mysteries of the universe with an eye to exploiting them, and without a single care for whoever might get hurt along the way. She's been annoyed by the Doctor's interference, and has used him on one occasion to try to achieve her goals, but she is not interested in revenge. She rolled her eyes at the Master's obsession with fighting the Doctor. If this long game she's been playing is a dish she wants served cold, I have great trouble buying it.

It also negates those moments where Mrs. Flood showed some compassion and empathy, such as encouraging Ruby to step on board the TARDIS, or her delight at the Doctor defeating Sutekh. Even her willingness to accept Sutekh's temporary win is at odds with this supposed declaration of war against the Doctor.

And is it pedantic of me to ask where the hell she was during the Time War, or any of the other elements and events that fell out around it?

I realize that with this series being as long as it is, it becomes harder and harder to justify why a particular villain waits so long to step out of the shadows and wreak their vengeance against the Doctor, but not doing so cheapens their arrival. There should be a reason why the Rani is acting now and not before, and I'll be disappointed if none is supplied.

Indeed, why does she have to be the final villain, and why does her intent have to be villainous? There is still plenty of time for Russell to surprise me here (two episodes, in fact). While it may be natural to assume that the Rani is responsible for the Earth's destruction in a week's time, there is absolutely no evidence connecting the two. Maybe she isn't responsible, and she's angry at the Doctor because she believes that he's the one who's responsible, either through something he's done, or something he failed to do.

And given that there are still plot elements that we can still incorporate from the sixtieth anniversary specials, such as the fourteenth Doctor worrying that he let strange forces through by calling upon superstitions and using salt at the End of the Universe (see Wild Blue Yonder), then we have opportunities for some of those elements to play out in the mayhem to come. Perhaps these elements force the Doctor and the Rani into an uneasy alliance. I would find that interesting.

This season needs an extra curve ball in order to produce a resolution that is both effective and unexpected. We did not get that curve ball last year -- Sutekh just showed up, and then he was defeated, causing the series to fall flat -- and I'm worried that this could be another year where Russell is unable to deliver all that the set-up promises. Which would be a shame, because the episodes that have set all this up have been among the best the show has had to offer.

Wish World and Reality War? I'm rooting for you. But you have a big job ahead of you. I'd ask you not to mess things up, but that depends on whether this job is impossible to begin with.

We shall see.


Brickhouse Guitars

Boucher BG-52-M #IN-1361-DB Demo by Roger Schmidt


Code Like a Girl

Java 24: Important Features

Java 24 is a short-term JDK release that shipped in March 2025.
It introduces several JEPs (JDK Enhancement Proposals), ranging from preliminary previews to features permanently included in the JDK.
Let's explore some of the JEPs that are useful for developers in day-to-day work.

Java 24 Features

Note: Java 24 is not an LTS release. The most recent LTS release is Java 21, from September 2023; the next long-term support JDK will be Java 25, expected in September 2025.

But wait, why is Java releasing so many versions?

Gone are the days when new Java versions were released every two years; the platform has adopted a six-month release cadence to keep itself modern, predictable, and easier to evolve.

There are four categories of JEPs: experimental JEPs, incubator JEPs, preview JEPs, and permanent JEPs.

1. Language Features

1. JEP 488: Primitive Types in Patterns, instanceof, and switch (Second Preview)
2. JEP 492: Flexible Constructor Bodies (Third Preview)
3. JEP 494: Module Import Declarations (Second Preview)
4. JEP 495: Simple Source Files and Instance Main Methods (Fourth Preview)

2. Core Libraries & API Enhancements

1. JEP 485: Stream Gatherers
2. JEP 484: Class-File API
3. JEP 487: Scoped Values (Fourth Preview)
4. JEP 489: Vector API (Ninth Incubator)
5. JEP 499: Structured Concurrency (Fourth Preview)

3. Performance & JVM Improvements

1. JEP 483: Ahead-of-Time Class Loading & Linking — Faster Java Startup
2. JEP 491: Synchronize Virtual Threads without Pinning
3. JEP 450: Compact Object Headers

4. Security & Cryptography

JEP 478, 496, 497: Quantum-Resistant Cryptography
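As a brief illustration of the quantum-resistant algorithms, here is a minimal sketch of ML-KEM key encapsulation (JEP 496) using the standard javax.crypto.KEM API. This is an illustrative sketch, not code from the article; it assumes a JDK 24 runtime where the "ML-KEM" algorithm name is registered.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.KEM;
import javax.crypto.SecretKey;

public class MlKemSketch {
    public static void main(String[] args) throws Exception {
        // Generate an ML-KEM key pair ("ML-KEM" is the standard algorithm name in JDK 24)
        KeyPair kp = KeyPairGenerator.getInstance("ML-KEM").generateKeyPair();

        KEM kem = KEM.getInstance("ML-KEM");

        // Sender: encapsulate a fresh shared secret against the receiver's public key
        KEM.Encapsulated sent = kem.newEncapsulator(kp.getPublic()).encapsulate();
        SecretKey senderSecret = sent.key();

        // Receiver: decapsulate the ciphertext with the private key
        SecretKey receiverSecret =
                kem.newDecapsulator(kp.getPrivate()).decapsulate(sent.encapsulation());

        // Both sides now hold the same shared secret
        System.out.println("Secrets match: "
                + Arrays.equals(senderSecret.getEncoded(), receiverSecret.getEncoded()));
    }
}
```

Unlike the preview JEPs below, the ML-KEM and ML-DSA providers are final in Java 24, so no --enable-preview flag is needed.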

How to enable Preview Features of Java 24?

To compile and run code that contains preview features, we must specify additional command-line options. Since we are going to discuss some of the preview features of Java 24, we need to enable them explicitly during compilation and runtime:

Compilation:

javac --enable-preview --release 24 YourClass.java

Execution:

java --enable-preview YourClass
1. Language Features

1. JEP 488: Primitive Types in Patterns, instanceof, and switch (Second Preview)

Until Java 22, pattern matching applied primarily to reference types: developers could use pattern matching with the instanceof operator to simplify type checks and casts for objects.
Now we can use primitive types (like int, float, double) directly in pattern matching, instanceof, and switch statements, making code more uniform and expressive. The feature is still in preview in Java 24.

Pros:

  • Eliminates boilerplate code and unsafe casts.
  • Enables concise and type-safe checks and transformations with primitives.

Example:

Object value = 42;
if (value instanceof int i) {
    System.out.println("Integer value: " + i);
}

int status = 2;
String message = switch (status) {
    case 0 -> "OK";
    case 1 -> "Warning";
    case 2 -> "Error";
    case int i -> "Unknown status: " + i;
};
System.out.println(message);
2. JEP 492: Flexible Constructor Bodies (Third Preview)

It lets us place statements before calling super() or this() in a constructor, so you can validate arguments or initialize fields before delegating to another constructor.
The feature was first previewed in Java 22 as JEP 447: Statements before super(…) (Preview) and previewed again in Java 23 as JEP 482: Flexible Constructor Bodies (Second Preview).

Pros:

  • Improves code reliability and maintainability.
  • Removes the need for awkward static helper methods or intermediate constructors.

Example:

class Student extends Person {
    Student(String name) {
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("Name cannot be empty");
        }
        super(name);
    }
}
3. JEP 494: Module Import Declarations (Second Preview)

It allows developers to import all public top-level classes and interfaces from every package exported by a module with a single import module declaration.

Pros:

  • Simplifies the import process and reduces boilerplate code.
  • Useful for non-modular codebases that want to leverage modular libraries without a full migration to the module system.

This feature is re-previewed a second time in Java 24. The key enhancements are:

Single Import for Modules:
Use import module moduleName; to bring in all public classes and interfaces from all packages exported by that module (and its transitive dependencies).

Example:

import module java.sql;

public class ModuleImportExample {
    public static void main(String[] args) {
        System.out.println("Module import works!");
    }
}

This is the feature that enables coalescing many imports: replacing multiple package or on-demand import statements with a single module import declaration.

Example:

Instead of writing:

import javax.xml.*;
import javax.xml.parsers.*;
import javax.xml.stream.*;

You can coalesce these into:

import module java.xml;

Works in Non-Modular Code:

We can use module import declarations in any Java source file, even if our code isn't modularized: the feature works in regular Java programs without a module-info.java file and doesn't require any change to the project structure.

Example: we can write this in any Java file:

import module java.base;

public class Example {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        System.out.println(list);
    }
}
  • Here, List and ArrayList work without extra import lines, and you don’t need to make your project a module.

Importing All from a Module

import module java.base;

public class BaseModuleExample {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        System.out.println("List created: " + list);
    }
}
  • No need for import java.util.List; or import java.util.ArrayList;: all public classes from the packages exported by java.base (like java.util.*, java.io.*) are available.

Transitive Imports:
Importing a module also imports all packages exported by its required modules. For example, import module java.se; brings in the entire Java SE API, including everything from java.base.

import module java.sql;

public class SqlModuleExample {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection("jdbc:your_database_url");
        System.out.println("Connection established: " + conn);
    }
}
  • Connection and DriverManager are available because java.sql exports them and transitively exports javax.sql.

Shadowing and Ambiguities:
In Java 24, type-import-on-demand declarations (like import java.util.*;) and single-type imports take precedence over module imports, helping resolve naming conflicts.

If two modules export classes with the same name (e.g., Date in java.util and java.sql):

import module java.base;
import module java.sql;
import java.sql.Date; // Shadows Date from java.util

public class AmbiguityExample {
    public static void main(String[] args) {
        Date date = new Date(0);
        System.out.println("Date: " + date);
    }
}
It's good practice to group the imports:
// Module imports
import module java.base;
import module java.sql;

// Package imports
import java.util.*;
import javax.swing.text.*;

// Single-type imports
import java.sql.Date;

public class Example { ... }
  • Order reflects specificity: module imports < on-demand imports < single-type imports.
4. JEP 495: Simple Source Files and Instance Main Methods (Fourth Preview)

Allows Java programs to be written without explicit class declarations and supports instance main methods.
The main method can now be an instance method: not static, not public, and with no need for String[] args.

Pros:

  • Reduces boilerplate code.
  • Makes it easier for beginners to start with Java, and for experienced developers to write quick, small programs.

Example:

void main() {
    System.out.println("Hello, World!");
}

Run with:

java --enable-preview --source 24 HelloWorld.java
2. Core Libraries & API Enhancements

1. JEP 485: Stream Gatherers

Before Java 24, the Stream API offered only a limited set of built-in intermediate operations (map, filter, flatMap, etc.). If you needed more complex, stateful, or multi-element transformations, like batching elements into groups, sliding windows, or custom folding, you had to write cumbersome code outside the stream pipeline, use collectors (which are terminal, not intermediate), or implement complex custom Spliterators. These approaches were often hard to read and maintain, and did not preserve the laziness or composability of streams.

Example: Batching (Fixed Window) Operation
Suppose you wanted to split a list of students into groups of three using streams.

Before Java 24:
You had to write manual logic outside the stream, breaking the fluent pipeline and making the code less readable:

List<String> names = List.of("Alice", "Bob", "Charlie", "David", "Eve", "Frank");
List<List<String>> batches = new ArrayList<>();
for (int i = 0; i < names.size(); i += 3) {
    batches.add(names.subList(i, Math.min(i + 3, names.size())));
}
batches.forEach(System.out::println);
Problems with the program above:

  • Not a real stream operation (must collect to a list first).
  • Breaks stream laziness and composability.
  • Harder to read and maintain.

Why Gatherers are useful: they let you create custom intermediate operations for streams (like windowing, grouping, and batching) easily, making data processing more powerful and readable.

How Java 24 Solves This

With Java 24’s Stream Gatherers, you can create custom intermediate operations directly in the stream pipeline. For batching, you can use the built-in Gatherers.windowFixed:

import java.util.List;
import java.util.stream.Gatherers;

List<String> names = List.of("Alice", "Bob", "Charlie", "David", "Eve", "Frank");

names.stream()
        .gather(Gatherers.windowFixed(3)) // Group elements in batches of 3
        .forEach(System.out::println);

Benefits:

  • Clean, readable, and fully inside the stream pipeline.
  • Preserves laziness and composability.
  • No need for manual index calculations or breaking the stream flow.
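Besides fixed windows, the built-in gatherers also cover the sliding-window case mentioned earlier. A minimal sketch (assuming a Java 24 runtime, where Stream Gatherers are final and need no preview flag):

```java
import java.util.List;
import java.util.stream.Gatherers;

public class SlidingWindowExample {
    public static void main(String[] args) {
        List<Integer> readings = List.of(10, 20, 30, 40, 50);

        // Each window of 3 overlaps the previous one by two elements
        List<List<Integer>> windows = readings.stream()
                .gather(Gatherers.windowSliding(3))
                .toList();

        // Prints [10, 20, 30], [20, 30, 40], [30, 40, 50]
        windows.forEach(System.out::println);
    }
}
```

Like windowFixed, this is a true intermediate operation, so it composes with map, filter, and the rest of the pipeline while staying lazy.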
2. Class-File API (JEP-484)

The Class-File API (JEP 484) is a new standard Java API introduced to provide a unified, reliable, and modern way to parse, generate, and transform Java class files. Prior to this API, developers had to rely on third-party libraries like ASM, ProGuardCORE, or ByteBuddy, which often lagged behind new Java class file features and introduced compatibility and maintenance challenges.

  • The API was first previewed as JEP 457 in JDK 22, refined as JEP 466 in JDK 23, and is finalized in JDK 24 with minor adjustments based on feedback and experience.
  • It replaces the need for the JDK to bundle third-party libraries like ASM internally, enabling faster adoption of new JVM class file features and reducing ecosystem fragmentation.
  • The API is designed to evolve in sync with the Java Virtual Machine Specification and the class file format itself.

Some of the key features include

  • Provides a type-safe, immutable, and modern Java idiomatic interface for working with class files, avoiding older patterns like the visitor pattern used in ASM.
  • Supports parsing existing class files, generating new class files programmatically, and transforming class files (e.g., for bytecode manipulation or optimization).
  • Focuses strictly on parsing, generating, and transforming class files; it does not include code analysis features.
  • Located in the java.lang.classfile package in Java 24.

The example code demonstrates how to use the Class-File API in Java 24 to read a .class file and print its class name and method names:

import java.lang.classfile.ClassFile;
import java.lang.classfile.ClassModel;
import java.lang.classfile.MethodModel;
import java.nio.file.Files;
import java.nio.file.Path;

public class EnhancedClassFileAPIExample {
    public static void main(String[] args) throws Exception {
        Path path = Path.of("Example.class");                     // Path to the class file
        byte[] classBytes = Files.readAllBytes(path);             // Read class file bytes
        ClassModel classModel = ClassFile.of().parse(classBytes); // Parse bytes into a ClassModel
        System.out.println("Class Name: " + classModel.thisClass().asInternalName());
        for (MethodModel method : classModel.methods()) {         // Iterate over methods
            System.out.println("Method: " + method.methodName().stringValue());
        }
    }
}
  • ClassFile.of().parse(byte[]) parses the raw bytes of a class file into a ClassModel representing the structure of the class.
  • classModel.thisClass() returns the constant-pool entry for the class's own name.
  • classModel.methods() provides a list of method models, each of which exposes its name and other metadata.
  • This example shows how straightforward it is to inspect class files without external libraries.
Pros:
  • Simplifies development of tools and frameworks that work with Java bytecode by providing a standard, well-maintained API.
  • Ensures compatibility with the latest Java class file features immediately upon JDK release.
  • Reduces dependency on external libraries, lowering maintenance overhead and improving ecosystem consistency.
3. JEP 487: Scoped Values (Fourth Preview)

Java introduced scoped values as a preview feature to provide a better alternative to thread-local variables for sharing immutable data within a thread and its child threads. This is especially useful with virtual threads and structured concurrency.

  • Java 23 (Third Preview):
    Scoped values included methods like callWhere and runWhere in the ScopedValue class to bind values temporarily. While functional, these methods made the API more complex and less fluent.
  • Java 24 (Fourth Preview):
    The API was refined by removing callWhere and runWhere. Now, scoped values are bound and accessed exclusively through the ScopedValue.Carrier.call and ScopedValue.Carrier.run methods. This change simplifies the API, making it more intuitive and consistent.

How Scoped Values Work in Java 24

  1. Define a scoped value:

private static final ScopedValue<String> TRANSACTION_ID = ScopedValue.newInstance();

  2. Bind a value within a scope using ScopedValue.where, which returns a Carrier:

Carrier carrier = ScopedValue.where(TRANSACTION_ID, "txn-456789");

  3. Run code within that scope:

carrier.run(() -> {
    System.out.println("Processing transaction: " + TRANSACTION_ID.get());
    processPayment();
});

Inside the scope, any method can access the scoped value safely via TRANSACTION_ID.get().

Example:

import java.lang.ScopedValue;
import java.lang.ScopedValue.Carrier;

public class ScopedValueExample {
    private static final ScopedValue<String> TRANSACTION_ID = ScopedValue.newInstance();

    public static void main(String[] args) {
        Carrier carrier = ScopedValue.where(TRANSACTION_ID, "txn-456789");
        carrier.run(() -> {
            System.out.println("Processing transaction: " + TRANSACTION_ID.get());
            processPayment();
        });
    }

    private static void processPayment() {
        System.out.println("Completing payment for transaction: " + TRANSACTION_ID.get());
    }
}

Pros:

  • Simpler and more fluent: The API is easier to learn and use without redundant methods.
  • Immutable and safe: Scoped values are immutable and only accessible within their bound scope.
  • Better performance: Lower overhead compared to thread-locals, especially with virtual threads.
  • Clear data flow: Scoped values make sharing data explicit and structured.

Java 24’s scoped values API is now cleaner and more fluent: by removing the older binding methods and relying on the Carrier for scoped execution, it makes sharing immutable data across threads safer, easier, and more efficient, a good fit for modern concurrent Java applications.

Try scoped values with your own context data like transaction IDs, request IDs, or session tokens to simplify thread-local data sharing!

  • Provides a safe, efficient way to share immutable data across threads, especially with virtual threads; a better fit than thread-locals for many use cases.
4. Vector API (JEP-489, Ninth Incubator)

The Vector API (JEP 489, Ninth Incubator in Java 24) lets developers write code that performs computations on entire arrays (vectors) of numbers at once, rather than processing one value at a time. This approach, called vectorization, leverages modern CPU instructions (SIMD) to achieve much better performance than traditional scalar loops, especially for numerical and data-processing workloads.

How the Vector API Works
  • Platform Agnostic: The API is designed to work efficiently on any CPU that supports vector instructions (like x64 with SSE/AVX, and ARM AArch64 with NEON/SVE), but will gracefully degrade to regular code if vector instructions aren’t available.
  • Clear and Concise: Developers can write vectorized code using familiar Java constructs, and the API abstracts away the hardware details.
  • Performance: By operating on multiple data elements in parallel, vectorized code can be significantly faster than equivalent scalar code.

Here’s what’s happening in the example:

VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;
float[] a = {1.0f, 2.0f, 3.0f, 4.0f};
float[] b = {5.0f, 6.0f, 7.0f, 8.0f};
float[] c = new float[4];
for (int i = 0; i < a.length; i += SPECIES.length()) {
    var va = FloatVector.fromArray(SPECIES, a, i);
    var vb = FloatVector.fromArray(SPECIES, b, i);
    var vc = va.add(vb);
    vc.intoArray(c, i);
}
  • VectorSpecies describes the optimal vector size for the current hardware.
  • The loop processes several elements at a time (the number depends on the CPU’s vector width).
  • FloatVector.fromArray loads a chunk of the array into a vector.
  • .add performs addition on all elements in the vector at once.
  • .intoArray writes the result back to the output array.

This code will use the best available vector instructions on the CPU, making it much faster than a regular loop that adds one element at a time.

5. JEP 499: Structured Concurrency (Fourth Preview)

Structured concurrency (JEP 499, Fourth Preview) simplifies concurrent programming by treating groups of related tasks running in different threads as a single unit of work. This streamlines error handling and cancellation, improves reliability, and enhances observability.

Some of the key features:

  1. Atomic Unit of Work:
    Both subtasks (fetchUserProfile and fetchNotifications in the example below) are treated as a single logical operation. If either fails, both are automatically canceled.
  2. Automatic Cleanup:
    The try-with-resources block ensures all threads are cleaned up when the scope exits.
  3. Coordinated Error Handling:
    throwIfFailed() throws the first exception encountered, preventing partial results from being used if any subtask fails.
  4. Result Aggregation:
    Results are safely combined after all tasks have completed successfully.
Some of the use cases are:
  • Aggregating data from multiple microservices.
  • Parallel database queries.
  • Concurrent file/network operations.
  • Bulk processing with failure rollback.
import java.util.concurrent.StructuredTaskScope;

public class StructuredConcurrencyDemo {
    public static void main(String[] args) throws Exception {
        String userId = "alice123";
        UserDashboard dashboard = loadUserDashboard(userId);
        System.out.println(dashboard);
    }

    static UserDashboard loadUserDashboard(String userId) throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            // In the Java 24 preview API, fork() returns a Subtask, not a Future
            StructuredTaskScope.Subtask<UserProfile> profileTask =
                scope.fork(() -> fetchUserProfile(userId));
            StructuredTaskScope.Subtask<Notifications> notificationsTask =
                scope.fork(() -> fetchNotifications(userId));

            scope.join();          // wait for both subtasks to complete
            scope.throwIfFailed(); // propagate the first failure, if any

            return new UserDashboard(
                profileTask.get(),
                notificationsTask.get()
            );
        }
    }

    // Simulated data fetchers
    static UserProfile fetchUserProfile(String userId) {
        return new UserProfile(userId, "Alice", "alice@example.com");
    }

    static Notifications fetchNotifications(String userId) {
        return new Notifications(new String[] {
            "Welcome back, Alice!",
            "You have 3 new messages."
        });
    }

    // Simple record classes for demonstration
    record UserProfile(String id, String name, String email) {}
    record Notifications(String[] messages) {}
    record UserDashboard(UserProfile profile, Notifications notifications) {}
}

As a preview API, structured concurrency must be compiled and run with --enable-preview.
3. Performance & JVM Improvements

1. JEP 483: Ahead-of-Time Class Loading & Linking — Faster Java Startup

Java apps often start slowly because the JVM must load, verify, and link thousands of classes every time the app runs. JEP 483 speeds this up by caching loaded and linked classes after a special training run, so subsequent startups are much faster.

How It Works

  1. Training Run: Run your app once with JVM options that record which classes are loaded and linked.
  2. Cache Creation: Generate an AOT cache file containing these preloaded classes.
  3. Faster Startup: Run your app using the cache, skipping class loading and linking steps.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, Java 24!");
    }
}

For a simple CLI app:

# Record classes during a training run
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp Hello.jar Hello
# Create the AOT cache
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp Hello.jar
# Run with cached classes for faster startup
java -XX:AOTCache=app.aot -cp Hello.jar Hello
Pros:

  • Startup can be up to 40% faster.
  • Ideal for short-lived apps, microservices, and serverless functions.
  • No code changes needed, just a training run and JVM options.

Cons:

  • Requires a training run and a consistent classpath.
  • Cache files can be large.
  • Not compatible with custom class loaders or some JVM features like ZGC.
2. JEP 491: Synchronize Virtual Threads without Pinning

JEP 491 removes a major bottleneck for virtual threads by allowing them to enter synchronized blocks or methods without “pinning” the underlying carrier (platform) thread. Previously, if a virtual thread blocked inside a synchronized section (e.g., waiting to acquire a lock or calling wait()), it would pin its carrier thread, reducing scalability. Now, the carrier thread is released and can run other virtual threads, making high-concurrency apps more efficient.

  • Lets you use synchronized safely with virtual threads, even in legacy code.
  • Greatly improves scalability and throughput for applications using thousands or millions of virtual threads.
  • No need to refactor existing synchronized code to use concurrent locks for most cases.
synchronized (lock) {
    // critical section
    processRequest();
}

With JEP 491, if a virtual thread blocks in this synchronized block (e.g., on wait() or lock contention), it is unmounted from its carrier thread, freeing the carrier to run other virtual threads. This means you can handle massive concurrency without running out of platform threads.
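As an illustrative sketch (the class and method names here are made up for the demo; the virtual-thread API itself is standard since Java 21), many virtual threads can now contend on a single monitor without exhausting platform threads:

```java
public class VirtualSyncDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo(10_000)); // prints 10000
    }

    // Starts n virtual threads that each increment a shared counter inside
    // a synchronized block, then waits for all of them to finish.
    // With JEP 491, a virtual thread blocked on the monitor is unmounted
    // instead of pinning its carrier thread.
    static int runDemo(int n) throws InterruptedException {
        int[] counter = {0};
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            threads[i] = Thread.ofVirtual().start(() -> {
                synchronized (lock) {
                    counter[0]++;
                }
            });
        }
        for (Thread t : threads) {
            t.join();
        }
        return counter[0];
    }
}
```

On earlier JDKs this pattern risked starving the carrier-thread pool under heavy contention; with JEP 491 it scales with the number of virtual threads instead.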

Cons:

  • Some rare cases (like blocking in native code) can still cause pinning.
  • For maximum scalability, avoid heavy blocking or I/O inside synchronized blocks.
3. JEP 450: Compact Object Headers

Reduces the size of Java object headers on 64-bit JVMs from 12–16 bytes down to just 8 bytes. This saves memory for every object, making a big difference in apps with millions of objects.

  • Cuts heap usage by 10–20% in real-world workloads.
  • Lets more objects fit in memory and CPU cache, which can speed up apps and reduce garbage collection pauses.
  • Especially valuable for data-heavy apps (collections, caches, analytics, etc.).
Suppose your app creates millions of objects:

List<Object> list = new ArrayList<>();
for (int i = 0; i < 1_000_000; i++) {
    list.add(new Object());
}

With compact headers, each object uses 8 bytes less memory. For a million objects, that’s about 8 MB saved, and much more in larger systems.
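Compact object headers are opt-in; assuming a Java 24 JVM, the experimental flags look like this (MyApp is a placeholder for your main class):

```shell
# Enable compact object headers (experimental in Java 24)
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders MyApp

# Confirm the flag actually took effect
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders \
     -XX:+PrintFlagsFinal -version | grep UseCompactObjectHeaders
```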

Cons:

  • Still experimental in Java 24 (must be enabled with JVM flags).
  • Not yet compatible with all garbage collectors (e.g., ZGC).
  • May cause up to 5% performance overhead in rare cases.

In short, compact object headers reduce memory usage for applications with many objects, improving efficiency at scale.
4. Security & Cryptography

JEP 478, 496, 497: Quantum-Resistant Cryptography

These JEPs future-proof applications against quantum computing threats. JEP 496 adds ML-KEM (a key encapsulation mechanism) and JEP 497 adds ML-DSA (a digital signature algorithm), both NIST-standardized post-quantum schemes, while JEP 478 previews a Key Derivation Function API.
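As a sketch of the new algorithms in use (assuming a Java 24 runtime, where ML-DSA is exposed through the standard JCA provider under that algorithm name; the class and method names here are illustrative), post-quantum signing looks like ordinary JCA code:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class PqcDemo {
    public static void main(String[] args) throws Exception {
        System.out.println(signAndVerify("hello".getBytes()));
    }

    // Generates an ML-DSA key pair (JEP 497), signs the message with the
    // private key, and verifies the signature with the public key.
    static boolean signAndVerify(byte[] msg) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("ML-DSA");
        KeyPair kp = kpg.generateKeyPair();

        Signature signer = Signature.getInstance("ML-DSA");
        signer.initSign(kp.getPrivate());
        signer.update(msg);
        byte[] signature = signer.sign();

        Signature verifier = Signature.getInstance("ML-DSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(msg);
        return verifier.verify(signature);
    }
}
```

Because the new algorithms plug into the existing KeyPairGenerator/Signature APIs, migrating code is mostly a matter of swapping algorithm names.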

More detailed information can be found here.

Thank you for reading this article. Please share your suggestions and feedback.

  • Clap and Share if you liked the content.
  • 📰 Read more content on my Medium (on Java Developer interview questions)
  • 🔔 Follow me on: LinkedIn

Please find my other useful articles on Java Developer interview questions

Following are some of the famously asked Java8 Interview Questions

Frequently asked Java Programs

Dear Readers, these are the commonly asked java programs to check your ability on writing the logic

SpringBoot Interview Questions | Medium

Rest and Microservices Interview Questions| Medium

Spring Boot tutorial | Medium

Java24 Important Features. was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Code Like a Girl

The Rise and Fall of Agile: What happens Next?

For years, Agile has been the gold standard for software development and project management. Teams across industries embraced its…

Continue reading on Code Like A Girl »


James Davis Nicoll

Amusing Ourselves To Death / Fahrenheit 451 By Ray Bradbury

Ray Bradbury’s Fahrenheit 451 is a science fiction novel.

Fireman Guy Montag and his wife Millie live comfortable, middle-class lives. Millie finds purpose in an endless stream of television entertainment. Guy burns books.



Cordial Catholic, K Albert Little

Did the apostles appoint successors? #popeleo #biblestudy #catholic #christian

-/-

Kitchener Panthers

Panthers drop road contest in Chatham

PHOTO: Dan Congdon


CHATHAM - A rough seventh inning led to Kitchener's downfall.

The Barnstormers scored six runs in the seventh, including a pair of bases loaded walks, to defeat the Panthers 9-1 Saturday night in Chatham.

Yadian Martinez struck out six in five innings of work, and was tagged with the loss.

Mitsuki Fukuda hit a three-run home run in the first to push the home side to an early lead, but Kitchener hung in the game and got an RBI single from Yosvani Penalver to make it 3-1 in the sixth.

Penalver had two hits and a walk to lead the attack.

Trent Lawson had a pair of hits, while Yordan Manduley, Nick Parsons, Nico KyRose and AJ Karosas each had a hit.

Brock Whitson got the win for Chatham-Kent, striking out two in six innings.

Kitchener falls to 1-1, while Chatham gets the win in its first game of the year.

The two teams meet again in Kitchener Sunday afternoon at 2 p.m.

GET YOUR TICKETS NOW and #PackTheJack for our home opener! First 500 fans in the gate get a commemorative t-shirt!

GAMESHEET



Elmira Advocate

MY PATIENCE IS WEARING THIN FOR ***HOLES WHO THINK THAT THEIR TIME IS SO MUCH MORE IMPORTANT THAN MINE

 


One example is people who initiate an e-mail conversation and then "Ghost" you. This includes currently both the Waterloo Region Record and Sebastian. The Record reporter approached me via e-mail asking for assistance in regards to Agent Orange production at the Uniroyal Chemical plant in Elmira. Then he disappears for eight days until his story "50 YEARS after the Vietnam War" is published. Even then it's only been a couple of e-mails saying he'll "correct" his story and he'll get back to me for a conversation. I'm still waiting!

Sebastian on the other hand NEVER answers his damn phone nor does he return phone calls. E-mails sent to him are routinely neither confirmed as received or heaven forbid responded to.  Only when he initiates, via e-mail, usually asking for technical assistance do I get lucky and have him actually respond, sometimes.  I've been fed up with him before and I'm getting there again. An unreliable environmental colleague Sebastian isn't worth much more than zero. I've never told anyone else this but if you don't start behaving soon then I'll be done with you again.

I expect no response from proven liars, sycophants and fellow travellers. This includes Woolwich Township, council, GHD, Lanxess, MECP, TRAC and many more.



Cordial Catholic, K Albert Little

The Purpose, Importance, and FUTURE of the Liturgy (w/ Chris Carstens)

-/-

Inksmit

Supporting Local: What the BOBI Act Means to Ontario School Districts

Educators have a wide range of options when it comes to coding and robotics kits, workforce skills technology, and MakerSpace tools. The Building Ontario Business Initiative (BOBI) Act mandates public-sector organizations, like school boards and universities, to give preference to Ontario businesses. Other provinces and territories have given similar guidance in recent months. 

As the only Canadian STEM EdTech company that designs and manufactures computer science hardware and creates all instructional content right here in Ontario, we know how much local matters.

When school boards choose local, they’re not just meeting policy requirements—they’re investing in their communities and helping students develop the skills that matter most right here in Ontario.

Local also means relevant: content and training tailored to Ontario’s curriculum, career pathways, and workforce priorities. We’re here to ensure that choosing local never means compromising on impact.

What is the Building Ontario Business Initiative (BOBI) Act?
  • The BOBI Act requires public sector organizations to give preference to Ontario businesses for procurements under $121,200 CAD.

  • For non-competitive procurements under $25,000, purchases should be made from Ontario businesses.

  • For competitive procurements, Ontario businesses must be invited.

The Act came into effect on April 1st, 2024. It aims to stimulate local economic growth, enhance supply chain resilience, and support job creation within the province. Read more here >>

And even if you're outside Ontario, choosing to buy Canadian matters.

Supporting Canadian-made education products helps keep innovation, quality, and economic growth rooted in our communities. Whether you’re a teacher in British Columbia or a district leader in Nova Scotia, every Canadian purchase helps strengthen the network of companies committed to serving Canadian classrooms with purpose-built tools that reflect our shared values.

At InkSmith, we strive to be a trusted partner to schools and districts. That means understanding the realities of the classroom and using design-thinking principles to ideate, prototype, and manufacture tools that help educators hit their goals while students build durable, future-ready skills — right here in Ontario.

When you’re exploring new EdTech solutions, we'd love to connect and learn more about your needs.

Thank you for choosing to support local.


The InkSmith Team


Greater Kitchener Waterloo Chamber of Commerce

Ontario Corporate Training Centre (OCTC): Why Mental Health and well-being matters more than ever in Ontario Workplaces

Burnout, Belonging, and the Bottom Line: Why Mental Health and Well-Being Matters More Than Ever in Ontario Workplaces

In recent conversations with businesses across Kitchener-Waterloo, a common thread has emerged: teams are tired. While every workplace looks different, the undercurrent of burnout, stress, and emotional strain seems to be everywhere, from factory floors to office boardrooms.

Mental health and well-being in the workplace isn’t a new conversation. But in 2025, it’s no longer a conversation we can afford to postpone.

The Numbers Behind the Challenge

More than 5 million Canadians aged 15 and older met the diagnostic criteria for a mood, anxiety, or substance use disorder in the span of just one year, according to Statistics Canada. As the Ontario Chamber of Commerce notes in its 2025 Economic Report, the province is experiencing a mental health “echo pandemic”— with lasting impacts on people, workplaces, and communities.

While 71% of Ontario businesses recognize that employee mental health and well-being are key to their organization’s success, only 41% currently have a formal strategy in place. Among small businesses, that number drops to just 32%.

Add in limited access to primary care and rising reports of opioid-related harm, and it’s no surprise that employers are feeling overwhelmed.

But here’s the opportunity: workplace well-being doesn’t have to start with a formal strategy. It can start with a conversation.

A 5-minute Practice That Can Make a Difference

At OCTC, our team begins every meeting with a check-in. The first five to ten minutes are set aside to ask: How are you? What’s on your mind? We ask and we listen. This simple practice has helped foster a supportive, psychologically safe team culture despite working fully remote.

Small changes like this can make a big difference. When employees feel safe to share what they need — whether it’s time, space, or accommodation — they’re more likely to stay engaged.

Practical Tools to Support You and Your Team

If you’re looking for practical and free resources to get started or improve your approach, these tools offer strong foundations:

1. Your Health Space (CMHA Ontario)

A free mental health program for Ontario’s healthcare sector offering live workshops, e-learning, and tools to address workplace stress and psychological safety.

2. Workplace Mental Health Playbook for Business Leaders (CAMH)

A clear, actionable guide with five recommendations for business leaders committed to supporting employee well-being.

3. Mental Health Toolkit (YES WorkAbility Project)

Tailored to help employers create accessible workplaces for employees who have a mental health disability. Includes interactive courses and training modules.

You can also explore the Greater Kitchener-Waterloo Chamber’s Mental Health Resource Hub for additional supports and local services.

A Final Thought

Supporting your team’s mental health doesn’t require a major overhaul. It starts with listening, being open, and with creating space for people to be human—especially when the pressures of work and life feel heavy.

This month, as we observe Mental Health Awareness Month and prepare for National Accessibility Week (May 26–June 1), we invite all business leaders to take action. A check-in, a new training or a conversation that helps strengthen their workplace.

The Ontario Corporate Training Centre (OCTC) is here to help businesses bridge the gap. Through free disability awareness and confidence training, along with support connecting to local employment service providers, OCTC offers practical, easy-to-implement solutions to help teams thrive.

The post Ontario Corporate Training Centre (OCTC): Why Mental Health and well-being matters more than ever in Ontario Workplaces appeared first on Greater KW Chamber of Commerce.


Cordial Catholic, K Albert Little

Eucharistic Miracles, the Soul, Near-Death Experiences – Science CAN'T Explain! (w/ Fr. Spitzer)

-/-

Brickhouse Guitars

April's Promise by Roger Schmidt Featuring The Boucher LE-HG-246-M

-/-

Cordial Catholic, K Albert Little

Why the CATHOLIC CHURCH Holds to Scripture AND Tradition (w/ Matthew Becklo)

-/-

Elmira Advocate

WATERLOO REGION WINS "CODE OF SILENCE" AWARD RE: WILMOT TWN. LAND ASSEMBLY

 


The Region of Waterloo could also win the same award for their lack of response to the ongoing decades of gamesmanship up here in Elmira, Ontario. Today's K-W Record advises us about the ongoing efforts at land assembly in Wilmot Township mostly against the wishes of local citizens. Reporter Terry Pender also wrote a separate article that described the "Code of Silence" award mentioned in the title above.

Yes this is the same reporter who wrote the article titled "50 YEARS after the Vietnam War" published on May 1, 2025. Yes it is the same reporter whom I've launched a complaint against for his failures to follow through with me in regards to a couple of errors in that May 1 article. Now all this being said I am forced to reconsider from Mr. Pender's position. His two articles in today's Record are both excellent and well written. They are on pages A1 and A2 and well worth reading. What I am reconsidering is Mr. Pender's workload which of course normally none of us would do. In this case I can understand that the hot, local topic is the 770 acre Wilmot land assembly not the fifty year old Vietnam War nor the thirty-six year old Elmira Water Crisis.

Is it possible that Mr. Pender's employer and supervisors have him a little overstretched at the moment? Is it possible that our print media who by all accounts are in desperate financial/readership straits are trying to squeeze more and more out of their employees including reporters? Would it be very understandable if they were? While none of this relieves the Record and its reporters of transparency and accountability nevertheless these factors could explain the delays in their response and correction of the errors in their article.

I am not at this time remotely interested in punitive actions against either the reporter or the Record. I can understand if they are both between a rock and a hard place and it is affecting their response time. That said as I advised  Mr. Pender two days ago, I am in my 76th year.  Thirty-six of them have been dealing with a bunch of twits/politicians and worse here in Elmira regarding Uniroyal Chemical. I have long ago burned through whatever patience I used to have dealing with those professional deceivers. Maybe a tiny bit more patience from my end combined with at least a promise of a date for both a phone call and a correction in the Record would be very helpful.  




Code Like a Girl

Learn From Canaries in the Coal Mine, and Other Actions for Allies

Each week, Karen Catlin shares five simple actions to create a more inclusive merit-based workplace and be a better ally.

1. Learn from canaries in the coal mine

In The Canary Code: A Guide to Neurodiversity, Dignity, and Intersectional Belonging at Work, Ludmila N. Praslova, PhD uses the metaphor of canaries in coal mines to highlight how marginalized individuals often notice harm in workplace systems before anyone else does. She writes:

“Organizational problems like the lack of fairness, bullying, and toxic cultures impact people with more intense senses and nervous systems before affecting others. Sensitive does not mean broken: it means processing the experience more fully, and intensely, just like birds process the air — the oxygen and the pollutants — more fully.”

In other words, when we listen to canaries, we hear early warnings about toxic systems and gain the opportunity to build workplaces where everyone can thrive.

So, when a colleague raises a concern — about bias in a process, exclusion in a meeting, or barriers in workplace culture — let’s listen to them and believe them. Without downplaying what they shared. Without dismissing their concern because we haven’t personally experienced it. Without trying to help them see it differently. Without saying, “I’m sure they didn’t mean to offend you.”

Let’s also ask ourselves, “What can we learn from their perspective?” Then, advocate for change and help address the systemic issue.

Share this action on Bluesky, LinkedIn, Instagram, or Threads.

2. Notice how names are used — and what they signal

A newsletter subscriber recently asked, “I find it disconcerting when people (almost always men) refer to other people (always men) by their last name. What are your thoughts, and what might be an appropriate response?”

Great question.

Using last names among men — especially in professional or male-dominated spaces — can be a subtle way of signaling in-group belonging. It can create a sense of camaraderie. But when this pattern isn’t extended to women, it sends another message: They’re not part of the inner circle.

When women are consistently referred to by their first names, while men get the “last-name treatment,” it can feel minimizing or exclusionary. Over time, those small signals can add up, chipping away at a sense of belonging.

Pay attention to naming patterns in your workplace. Are they consistent across genders and other identities? If not, disrupt the norm. Use inclusive language yourself, and model a naming convention that doesn’t reinforce “us vs. them” dynamics.

A simple “Hey, let’s stick with first names for everyone” can go a long way.

3. Understand the legality of diverse pipeline strategies

The Mansfield Rule is an inclusion strategy, primarily for law firms, that requires them to consider at least 30% of qualified underrepresented candidates for leadership roles and other key opportunities. It was inspired by a hackathon to improve gender balance in law firm leadership. Diversity Lab, the sponsor of that hackathon, now runs the Mansfield Rule certification program.

It’s important to note that the Mansfield Rule is not a quota system. It does not lead to the exclusion of any individual from consideration based on their race, gender, or any other aspect of their identity.

Despite its positive results (or perhaps because of them), the Mansfield Rule has faced criticism.

Fortunately, a recent U.S. Federal Court decision has upheld the legality of the Mansfield Rule. In a case involving President Trump’s Executive Order targeting law firm Perkins Coie, the Court affirmed that the rule is lawful and aligns with anti-discriminatory laws. Page 58 of the Court’s opinion states,

“The Mansfield Rule expressly does not establish any hiring quotas or other illegally discriminatory practices, requiring only that participating law firms consider attorneys from diverse backgrounds for certain positions.”

It’s helpful guidance for anyone who uses the Mansfield Rule or a similar strategy to diversify the hiring and promotion pipeline.

4. Lobby for office accommodations

Given that May is Mental Health Awareness Month in many countries, here is one way to be an ally: Lobby for and support office accommodations that can help ease and prevent mental health challenges.

In the Harvard Business Review’s How to Be a Mental Health Ally, advocate and author Katherine Ponte wrote:

“Allies and leaders should lobby for and support office accommodations that can benefit all employees by helping prevent mental health challenges and mitigating workplace stressors that can worsen mental health. Some easy and low-cost examples of accommodations from the American Disabilities Act include offering late starts (many psychiatric medications can be sedating), breaks to attend medical appointments, flextime, quiet workspaces, office psychiatric service dogs (or emotional support animals), remote work, and part-time work. Encourage all employees to discuss accommodations for their team and suggestions for how best to incorporate them.”

Ponte’s article also contains suggestions for talking to a colleague who may be struggling with their mental health. She provides some helpful examples of what to say and not say. Be sure to check it out.

5. Community Spotlight: Keep using the mic

This week’s spotlight on an ally action from the Better Allies community is from newsletter subscriber Sarah Rabeda, who wrote,

“When I joined the office two years ago, no one ran the mic for questions, nor did the speakers repeat the questions before answering. I started raising my hand and, instead of asking a question, requested the mic running. It took a few meetings of raising my hand and requesting, but now it’s part of our meeting culture here.”

Rabeda added,

“Recently, we had a higher-up visit our office to share strategy with a room of about 80 people. He began using the mic and, a few minutes later, said, ‘I don’t need the mic; I’m not going to use it,’ to which I immediately said loudly, ‘Please keep using the mic.’ After the meeting, I thanked him for continuing to use the mic. I explained that it’s for accessibility, as I and likely others could not hear him from the back. He graciously thanked me and explained that mics make him forget to breathe, so that is something he will work on as a leader.”

If you’ve taken a step towards being a better ally, please reply to this email and tell me about it. And mention if I can quote you by name or credit you anonymously in an upcoming newsletter.

That’s all for this week. I wish you strength and safety as we all move forward.

Karen Catlin (she/her), Author of the Better Allies® book series
pronounced KAIR-en KAT-lin, click to hear my name

Copyright © 2025 Karen Catlin. All rights reserved.

Being an ally is a journey. Want to join us?

  • Follow @BetterAllies on Bluesky, Instagram, Medium, Threads, or YouTube. Or follow Karen Catlin on LinkedIn
  • This content originally appeared in our newsletter. Subscribe to “5 Ally Actions” to get it delivered to your inbox every Friday
  • Read the Better Allies books
  • Form a Better Allies book club
  • Get your Better Allies gear
  • Tell someone about these resources

Together, we can — and will — make a difference with the Better Allies® approach.


Learn From Canaries in the Coal Mine, and Other Actions for Allies was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


House of Friendship

New Job, Same Values

Dawn Gill

Dawn Gill’s recent career change might sound like a sharp departure from her long-time role as House of Friendship’s Development Officer.

Certainly, her current position as Constituency Coordinator for her Member of Parliament invites a lot of questions about the ins and outs of political life. But she simply sees it as another way to remain focused on tackling social inequality, just from a different position.

Dawn firmly believes that the only way to build strong communities is for each of us to jump in and take action, starting with the simplest acts: “Whether that means donating, volunteering, or being an authentic ambassador in the community … all those things add up,” she says.

When I talk about the advocacy work I do, people ask me what the world of politics is like. I don’t think of it as politics. It aligns with my values.

That grassroots thinking led Dawn and her husband Rodney to explore the idea of leaving a gift in their Will to House of Friendship.

They discovered how accessible legacy giving is. “To borrow a catchphrase,” Dawn says, “we realized we’re richer than we think! We decided rather than wait for the ‘wealthy’ to take action, how about a whole bunch of folks just each do a little bit?”

Leaving a Gift in her Will allows Dawn to continue to strengthen her community and keep her values alive. It’s a simple act of kindness that keeps adding up – for years to come.

“I think, if you want to say I love you – say it now. If you want to give a gift – do it now.”

If you would like to explore the idea of leaving a gift in your Will to House of Friendship, contact Development Manager Joanne Adair at joannea@houseoffriendship.org, or visit www.houseoffriendship.org/wills.

The post New Job, Same Values appeared first on House Of Friendship.


James Davis Nicoll

Right From Wrong / A Quiet Teacher (Quiet Teacher, volume 1) By Adam Oyebanji

2022’s A Quiet Teacher is the first of Adam Oyebanji’s Quiet Teacher mystery series.

To the students and staff at Pittsburgh’s prestigious Calderhill Academy, Greg Abimbola is a language teacher whose good looks are made piratical by his eyepatch. Staff and students might be surprised to learn that Greg Abimbola is not the teacher’s real name and that his skill set extends far beyond teaching spoiled children Russian.

The Backing Bookworm

The Girls of Good Fortune


This historical fiction story is an enlightening and entertaining read. Set in 1880s Oregon, the story centres around Celia Hart, a biracial young woman - half white, half Chinese - who passes as white in a society where racism and mistreatment of minorities, particularly those of Asian descent, is rampant and often deadly.
Told in two timelines, the first shows Celia in a frightening situation, leaving readers to wonder how she got there. The second timeline takes readers to the beginning of Celia's story when she was a servant in the home of the mayor, until she became pregnant and is sent to work as a cleaner in a brothel. There Celia finds her own 'found family' and raises her child until her past influences the present and sends Celia on a quest to stand up for the rights of Chinese workers.
This book has a decidedly different feel in the first half compared to the second half and it was jarring, to be honest. The first half felt like a historical fiction novel with great insight and atmosphere (reminiscent of Janie Chang and Kate Quinn's The Phoenix Crown). The second half relies on Celia's naive and impulsive decisions (which were at times frustrating) that send her on a dangerous quest filled with kidnapping, ships, train jumping and more that required me to suspend disbelief.
This is a well-researched story that goes a bit off the traditional historical fiction rails. The first half is atmospheric as it teaches readers about racism, long-forgotten massacres against Chinese Americans and the limited choices of women, while the second half is more of an entertaining madcap adventure. Personally, I preferred the first half and found the second half was too over-the-top, but the ending will appeal to readers who like their stories tied up nice and neatly.

Disclaimer: Thanks to Sourcebooks Landmark for the complimentary digital copy of this book which was given in exchange for my honest review.

My Rating: 3 stars
Author: Kristina McMorris
Genre: Historical Fiction, BIPOC
Type and Source: ebook from publisher via NetGalley
Publisher: Sourcebooks Landmark
First Published: May 20, 2025
Read: May 13-15, 2025

Book Description from GoodReads: The New York Times bestselling author of Sold on a Monday and The Ways We Hide shines a light on shocking events surrounding Portland's dark history in this gripping novel of love, lore, and betrayal. 
She came from a lineage known for good fortune…by those who don't know the whole story. 

Oregon, 1888. Amid the subterranean labyrinth of Portland's notorious Shanghai Tunnels, a woman awakens in an underground cell, drugged and disguised. Celia soon realizes she's a "shanghaied" victim on the verge of being shipped off as forced labor, leaving behind those she loves most. Although well accustomed to adapting for survival—being half-Chinese, passing as white during an era fraught with anti-Chinese sentiment—she fears that far more than her own fate hangs in the balance.

As she pieces together the twisting path that led to her abduction, from serving as a maid for the family of a dubious mayor to becoming entwined in the case of a goldminers' massacre, revelations emerge of a child left in peril. Desperate, Celia must find a way to escape and return to a place where unearthed secrets can prove deadlier than the dark recesses of Chinatown.

A captivating tale of resilience and hope, The Girls of Good Fortune explores the complexity of family and identity, the importance of stories that echo through generations, and the power of strength found beneath the surface.


Code Like a Girl

Microsoft Azure AZ-900 Certification: My Personal Journey

CERTIFICATION
My experience taking the exam and what I plan to do next.
♦The Microsoft Azure Cloud certification program.
Reinventing a career

I recently passed the Microsoft Azure Fundamentals AZ-900 certification exam, and it provided a valuable refresh to my background. But was it worth it?

I have experience as a senior software developer, proficient in multiple programming languages and industries. However, after years of working in the same role or company, it was becoming far too comfortable to slip into stagnation.

I decided to push myself into a new learning experience and obtain a Microsoft Azure certification, which not only taught me a new world of cloud technology, but also answered the question - are certs worth the effort?

Preparing for the AZ-900 certification

My initial preparation for the Microsoft Azure Fundamentals AZ-900 certification included several first steps.

I began by reading through the Microsoft Learn material, which covers all of the required exam topics: describing cloud concepts, Azure architecture and services, and Azure management and governance.

I had very little experience with Microsoft Azure cloud technology prior to taking this certification exam, which made the process a new (and exciting!) experience.

Online learning versus textbooks

The Microsoft Learn material is a great way to dip your toes into a certification, although the content is still on the light side.

I wanted to go more in depth into the learning process and understand more detail about all of the cloud concepts included in the preparatory material. While reading through Microsoft Learn, I also began reading through the book Microsoft Azure Fundamentals Exam Ref AZ-900 by Jim Cheshire.

I find that reading a full-fledged technology book, in addition to online material, allows me to dive deep into a technology to learn all aspects and concepts.

Moving to audiobooks

The textbook provided a great range of discussion and detail on the required topics of the exam.

I found the case study examples, which focus on real-life business scenarios of implementing cloud solutions with Azure, to be particularly useful.

Of course, reading through a textbook can be a challenging and time-consuming process. By leveraging the iPhone spoken content feature, I was able to listen to the audio version of the book while taking walks. I find this to be a great way to learn while moving.

Gaining confidence

Following the completion of the book and online material, I used several online practice exams to get a feel for the types of questions that the certification would ask.

I used exams and study guides from several sources.

  • Microsoft Learn
  • WhizLabs
  • John Savill’s AZ-900 Study Cram YouTube
  • Coursera
  • Udemy

I found the combination of practice exams (both online and YouTube), along with the excellent Study Cram series by John Savill to be a great way to get ready for the exam.

My experience taking the exam

The certification exam is proctored by Pearson Vue and can be taken either at one of their offices or remotely.

I decided to take the exam remotely for convenience. However, I ran into technical issues with the computer requirements for the provided exam software. Specifically, the software requires complex and rigid control of your computer in order to prevent cheating.

The Pearson Vue test software takes control of your PC, closing all applications and windows on the computer — including my webcam driver software!

Rushing to the store to buy a webcam

The proctored exam requires that you have both a video camera and a microphone enabled at all times during the test process.

Since the Pearson Vue software forcefully closed my webcam driver software, I ended up having to buy a new webcam on the morning of the exam. I left to purchase one first thing, to ensure it would be permitted by the exam software. Luckily, I found one on clearance!

The combo webcam and microphone cost less than $30. While the video and audio quality were quite poor (compared to an iPhone recording), it successfully passed the validation checks.

Maintaining a stable Internet connection

With the webcam issue out of the way, there was another technical requirement that I was concerned about — my Internet connection!

Pearson Vue requires a stable internet connection throughout the exam. If your connection drops, you may be disqualified (depending on the proctor’s judgement call).

Considering the rigid policies for at-home proctored exams, I would recommend taking the exam at an official Pearson Vue office to avoid this stress.

The waiting room

After initiating the remote proctored exam, I was placed into an online waiting room queue.

While in the waiting room, the proctor goes through required documentation (driver’s license, etc.) to prove identity, as well as verification of your desk and workspace environment.

The entire waiting period took about 15 minutes before I was able to actually take the exam — and I was #4 in the queue.

Waiting is the worst part

In addition to waiting, you have to maintain full visibility in the camera, with both the camera and microphone active during this entire process.

You are not permitted to leave the desk or open any other windows on your PC during this time. This includes bathroom breaks!

I was also asked to sweep the room with the camera to provide a broader view range of the desktop and the surrounding room for the proctor. They are likely using AI-assisted vision recognition software to identify potential anomalies in the room and prevent cheating.

Be sure to have a clean desk and workspace. Also, avoid wearing jewelry or bracelets.

Finishing the exam in under 30 minutes

Following my preparatory work and studying, which spanned about a month, I found that I was able to complete the exam quickly.

The questions in the AZ-900 exam were straightforward and relatively short, and the practice exams and study material made the terminology in the questions familiar and easy to follow.

In fact, I was able to complete the exam in under 30 minutes. Even so, I flagged a handful of questions for review — ones where I had guessed the answer or was simply unsure.

Generally, in multiple-choice tests you are best off going with your first answer when unsure or guessing. Still, I ended up changing my answer on 1–2 questions during the final review, before submitting the test and completing the exam.

My score was shown instantly with a passing result and successful completion of the AZ-900 certification!

Example exam questions and topics

I found some of the exam questions to be near duplicates of ones presented in the Microsoft Learn practice exam and WhizLabs guides.

The easier exam questions were basic in nature and covered general fundamental concepts on Microsoft Azure, including the following.

  • What is infrastructure as a service (IaaS)
  • What is platform as a service (PaaS)
  • What is software as a service (SaaS)

However, other questions were much more detailed, diving into online material from the Microsoft Azure documentation. I’ve included several examples of these below.

1. What type of storage does the hot access tier allow?

Answer: Standard storage

The hot access tier in Azure is used for files stored in standard storage accounts. It’s optimized for data that is accessed or modified frequently, offering the lowest access costs but higher storage costs compared to other tiers. This tier is particularly suitable for scenarios where data needs to be readily available for active use.

2. What type of storage does the cool access tier allow?

Answer: Standard storage

The cool access tier in Azure is designed for infrequently accessed data and is typically associated with standard storage accounts — not premium. Premium storage accounts, on the other hand, are optimized for high-performance workloads and do not support the cool access tier. Premium storage is better suited for scenarios requiring low latency and high throughput, such as virtual machine disks or databases.

3. How many days does the cool access tier retain files for?

Answer: 30 days

The cool access tier in Azure holds files for a minimum of 30 days. If you move or delete data before the 30-day retention period ends, additional charges apply. This tier is for infrequently accessed data and is priced for long-term storage when data doesn’t need to be retrieved often.

4. Can a Microsoft Entra tenant be enabled with dynamic policies?

Answer: Yes

Dynamic policies can be used in a Microsoft Entra tenant. Microsoft Entra supports dynamic group membership and Conditional Access policies, which allow you to automate access control based on user attributes or risk levels. Examples include dynamic groups, where membership is automatically updated based on user properties (e.g., department or job role), and Conditional Access policies that trigger multi-factor authentication based on sign-in or user risk levels.

5. Do you use ARM Templates or RBAC to ensure consistent policies and corporate governance when deploying Azure resources?

Answer: Azure Resource Manager Templates (ARM)

Azure Resource Manager (ARM) Templates are used for ensuring policies and corporate governance with Azure resources. Role-Based Access Control (RBAC) manages access to resources by assigning roles to users, groups, or service principals, but does not enforce deployment policies or governance.

6. Is Microsoft Entra the same as Active Directory (AD) or Azure AD?

Answer: No

Microsoft Entra is not the same as Azure Active Directory (Azure AD) or Active Directory (AD), but it is the newer form of Azure AD. Microsoft Entra is a cloud-based identity and access management solution and an overarching brand that includes Azure AD along with Entra Permissions Management and Verified ID for access management. Active Directory (AD), on the other hand, is the traditional on-premises service for managing identities.

Was it worth it?

I found studying the material and completing the certification to be very interesting and worthwhile overall.

I believe the content not only boosts my resume and credentials with recent cloud technology, but it also enhances my understanding of cloud technologies used within DevOps in my own work.

In fact, following the certification, I began utilizing Docker containers to package web application demos that I’ve published on GitHub. Containerization was a concept covered in the certification topics, specifically for describing application hosting options, including web apps, containers, and virtual machines.

I discovered that Docker containers were an amazingly convenient way to share and instantly launch web applications without requiring users to download repository source code or even build the code and libraries themselves.

♦Photo by: DALL·E
What do I plan to do next?

The Microsoft Azure Fundamentals AZ-900 certification enhanced my credentials with a refresh of cloud technology and opened up doors to potential new opportunities.

Now that I’ve completed the certification, I’ve already taken steps to update my resume and leverage cloud concepts in projects.

I now plan to continue my certification path by studying for the Microsoft Certified Azure Data Fundamentals DP-900. This certification covers another core area — databases and data fundamentals — that is highly desired by employers.

My hope is that by combining certifications in both cloud technology and data, in addition to my certifications in AI, I can boost my background and expertise to keep my skills updated into the future.

About the author

If you’ve enjoyed this article, please consider following me on Medium, Bluesky, LinkedIn, and my website to be notified of my future posts and research work.

Microsoft Azure AZ-900 Certification: My Personal Journey was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


KW Predatory Volley Ball

Congratulations Mia Crawford. University of Waterloo Commit

Read full story for latest details.


KW Granite Club

Meeting Room now looks amazing!

Years of water damage from the leaking roof made this space in our club unusable. Now it is being used for board rooms, yoga classes, ballroom dancing etc. The City of Waterloo repaired the room, and our GM, John Thomas, ordered and helped install the furniture, blinds, TV etc. We are very appreciative of the group of volunteers that worked tirelessly to install the new floor. Huge thank you to Marty Bell, Carl Keller, Marcus Baker, Martin Rombout and Dave Zenger (absent on picture day)!


Kitchener Panthers

Panthers suffer school day setback

KITCHENER - A rough seventh inning for the Kitchener Panthers.

The Chatham-Kent Barnstormers scored eight runs in the inning, en route to a 12-5 decision in front of over 1,000 students in Waterloo region Thursday morning. The contest was Kitchener's final exhibition game.

The Panthers stormed out to a 3-0 lead in the first, but the Barnstormers caught up by the fifth.

The game went back and forth, and Kitchener held a 4-3 lead until the Barnstormers took over in the seventh.

Evan Elliott struck out four in three innings of work, and gave up two runs on three hits.

On offense, Yordan Manduley, AJ Karosas and Klaus Aplevich each picked up two hits.

Charlie Towers and Wander Santana both had a hit and an RBI.

These teams will meet again in a home-and-home this weekend in regular season play.

First, Chatham on Saturday. Then, the Panthers host their home opener Sunday at 2 p.m.

GET YOUR TICKETS NOW and #PackTheJack all summer long!


Code Like a Girl

SQL Views vs Temporary Tables: Explained with Real-Life Analogies

Let’s be honest — we’ve all done it. I used to rewrite the same complex query over and over when I started learning SQL, thinking it would…

Continue reading on Code Like A Girl »


Code Like a Girl

Building Real AI Systems

My Notes on Chip Huyen’s ‘AI Engineering’

Reading Chip Huyen’s “AI Engineering”, I’ve learned a lot about foundation models, which are changing software engineering and even AI itself. I now have a much clearer picture of how AI is going to reshape the tech world as it evolves.

If you’re interested in the key insights of this book, this is what caught my eye. For those of you who have already read it, this should be a useful refresher.

Please note that I have focused on the ideas I found most interesting, skipping some background information.

♦Image from O’Reilly
Chapter 1: The Rise of Foundation Models

I believe that foundation models have completely changed the landscape of AI. Nowadays, we don’t start by designing our own models; we typically use pre-trained ones. Chip Huyen explains how this shift opens AI up beyond specialists, allowing more people to harness powerful tools. The chapter is a journey through AI, in retrospect and in vision, from its prehistory to the sophisticated systems of today. A major takeaway is the tangible impact of AI on society, and just how critical the need for expertise in AI engineering has become.

The sheer versatility of foundation models is best exemplified by the Super-NaturalInstructions benchmark (Wang et al., 2022), which highlights the multiple functions the models are capable of conducting across translation, question answering, sentiment analysis, and others (see the following picture).
♦Image from AI Engineering by Chip Huyen
Chapter 2: Understanding Foundation Models

As I learnt AI engineering, it became obvious that understanding foundation models isn’t optional — it’s a requirement. In this chapter, Chip Huyen gives a detailed yet digestible account of the underpinnings of these models. If you’re tasked with building or fine-tuning AI systems, this chapter covers the most essential factors.

Starting with the basics: training these models takes resources and expertise out of reach of many developers like me. There’s no need to construct all the pieces ourselves — we can use existing models. But before we can safely pick and tweak them, we need to understand a few key things: the data the models were trained on, how the models are structured, and the adjustments made after training.

One of the central topics in this chapter is training data. A model’s performance depends on the data it saw during training. A model without Vietnamese training data will bomb on Vietnamese text. When most of the training data comes from Wikipedia, models are not very good in technical areas such as science or law. Knowing the data used to train a model, you can anticipate its strengths and weaknesses.

Transformers occupy a considerable part of the architecture discussion. Chip explains why transformers became dominant and why they are still the winning choice now. Still, she does not shy away from the question: what amazing progress could await us after the transformer era? While the answer is not entirely clear, this is an important trend to watch.

Size also plays a role. The chapter explains “scaling laws,” which help developers make principled choices about model size given data and computational limitations. I see it as a triangle of trade-offs: parameters, tokens, and FLOPs (floating-point operations). Building an overly large model when you do not have enough data or compute may simply leave that capacity wasted.
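As a rough illustration of that parameters/tokens/FLOPs triangle, here is a tiny sketch using the widely cited Chinchilla heuristic (roughly 20 training tokens per parameter, with training compute C ≈ 6·N·D). This heuristic comes from the scaling-laws literature, not from the book’s text quoted here, so treat the numbers as an assumption:

```python
import math

def chinchilla_split(compute_flops):
    # Training compute is roughly C = 6 * N * D (N params, D tokens).
    # The Chinchilla heuristic picks D ≈ 20 * N, so C = 120 * N^2.
    n_params = math.sqrt(compute_flops / 120)
    n_tokens = 20 * n_params
    return n_params, n_tokens

# Example: a 1e21 FLOP budget suggests roughly a 2.9B-parameter model
# trained on roughly 58B tokens.
```

The point of the sketch is the shape of the trade-off, not the exact constants: for a fixed compute budget, a bigger model necessarily means fewer training tokens.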

♦Image from AI Engineering by Chip Huyen

How the model selects the next word also matters; this is called sampling. Chip calls it one of the most neglected concepts in AI, and that observation really speaks to me now. Sampling approaches like temperature and top-k have a huge effect on the results a model generates. Want fewer errors? Adjust your sampling.
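To make temperature and top-k concrete, here is a toy next-token sampler (an illustrative sketch, not the book’s code — real inference stacks do this inside the model runtime):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, seed=0):
    """Sample a token id from raw logits using temperature and top-k."""
    rng = np.random.default_rng(seed)
    # Lower temperature sharpens the distribution (safer, more repetitive);
    # higher temperature flattens it (more creative, more error-prone).
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    if top_k is not None:
        # Keep only the k highest-scoring tokens; discard the rest.
        cutoff = np.sort(scaled)[-top_k]
        scaled = np.where(scaled >= cutoff, scaled, -np.inf)
    # Softmax, shifted by the max for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

With a very low temperature (or top-k of 1) the sampler collapses to picking the single most likely token, which is exactly the “fewer errors” dial the chapter describes.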

Once the model has been trained, it is fine-tuned to fit human needs. More often than not, this means additional, goal-oriented training that incorporates feedback from end users. Though there are several benefits, there is a risk of degrading performance in some domains. Developers have to balance this.

♦Image from AI Engineering by Chip Huyen

Though this chapter does not promise to transform you into a foundational model pro, it does provide builders and tinkerers with basic strategies for decision-making.

With the growth of artificial intelligence in day-to-day tools and services, understanding the basics enables the creation of systems that are both creative and reliable, with the needs of users at the forefront.
Chapter 3: Evaluation Methodology

This chapter shifted my view of working with AI more than anything else. Although we can easily get excited about what AI systems offer, Chip Huyen argues that developers must check whether they actually produce accurate results. The chapter discusses how to evaluate AI systems, taking a closer look at open-ended tools like chatbots and creative platforms.

Evaluation is not merely a final step in the process. It has to be applied consistently and systematically, from the start of building a system to the end. Chip’s view that evaluation is the key obstacle to AI adoption resonated with me after diving into this chapter. Unchecked outputs of AI systems pose real threats. The risks become apparent when chatbots dispense questionable guidance or AI tools invent false legal precedents.

What makes evaluation tough? Foundation models are meant to function in an open-ended manner. Contrary to traditional models, which choose among a fixed set of outcomes, AI systems like GPT generate unique variations in their responses. The fact that many prompts have no single correct answer makes evaluation metrics more complicated.

Chip presents a number of evaluation techniques that address the complexity of open-ended responses:

  • Exact metrics such as accuracy are easy to compute, but they work best for tasks with fixed, known correct answers.
  • Subjective evaluation relies on human assessors, which brings delays and high costs.
  • AI-as-a-judge is an innovative method in which AI models evaluate the quality of other AI-generated results. Although fast and efficient, it raises questions about possible biases and reliability differences between various AI judges.
♦Image from AI Engineering by Chip Huyen
♦Image from AI Engineering by Chip Huyen

An important lesson learned is to understand potential failure points in your system. Chip supports architectural changes that make failure mechanisms transparent so that personalised evaluation efforts can be developed.

We should also consider sampling — how models pick the next word to suggest. This is more than a technicality: although sampling increases a model’s ability to produce new ideas and options, it may also result in mistakes. By polishing your sampling process, you can significantly improve the reliability of your product.

Chapter 4: Evaluating Foundation Models: The Most Challenging Part of AI Engineering

After going through Chapter 4, it became clear that AI systems often fail in the real world because they lack reliable evaluation. Chip is unsparing in her assessment — the dearth of relevant evaluation is still a significant impediment to AI deployment.

Moving ahead, our fundamental problem is open-endedness. Unlike standard ML problems with clear right or wrong answers, GPT and other generative models produce unpredictable outputs influenced by their context. This is a challenge because metrics such as precision or accuracy aren’t sufficient; evaluation is a complicated, multi-layered problem.

Chip introduces three primary evaluation strategies:

  • Exact Evaluation: Deterministic metrics such as accuracy or BLEU score. These benefit structured tasks, like classification, but do not work as well for creative output.
  • Subjective Evaluation: Human rating of outputs. It may be the preferred approach, but it is very resource-intensive and time-consuming.
  • AI-as-a-Judge: Using automated approaches to evaluate the quality of AI-created content. It is helpful and applicable in many situations, though valid criticisms about possible biases can be raised.
♦Image from AI Engineering by Chip Huyen

One important takeaway for me was the emphasis on careful experiment tracking. We are prone to changing prompts or training data without keeping track. Without systematic documentation, it is easy to run into major complications later. To know what works, Chip highlights the need to log all the variables — prompt iterations, rubric revisions, and user demographics.

Another particularly interesting discovery: the log-probabilities of the tokens generated by the model. Looking at a model’s confidence for individual tokens provides insights into its fluency, coherence, and possible truthfulness. I never would have guessed how much value a logprob could add to evaluating model output — this realization is game-changing for my work.

The chapter also covers using perplexity to rate fluency, and the return of traditional NLG metrics such as relevance and faithfulness. However, such approaches apply only in special situations. Open-ended tasks frequently require customised scoring metrics that fit your specific application.
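The link between logprobs and perplexity is simple enough to sketch. This is a minimal illustration, assuming your model API can return per-token log-probabilities (many inference APIs expose these as "logprobs"):

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp(-average log-probability of the tokens).
    # Lower perplexity means the model found the text more predictable,
    # which is a rough proxy for fluency.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```

For instance, if every token had probability 0.5, the average logprob is log(0.5) and the perplexity is exactly 2 — the model was effectively choosing between two equally likely options at each step.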

A big eye-opener for me: although helpful, MCQs are not an ideal metric for comparing generative models. They test recognition, not generation. Still, some research continues to rely on MMLU-style metrics, overlooking the areas where models shine or fail catastrophically in generation tasks.

Chip emphasizes multi-layered evaluation pipelines. You can utilize naive classifiers for detecting overall performance and then rely on more refined human or machine arbiters for all details. By dividing evaluation data according to user types or input categories, you might discover unanticipated biases or performance problems.

And don’t underestimate hallucination. It’s not random — it arises from models believing the information they generate is accurate. In a case study from the book, a model (fed misinformation) judged a shampoo bottle to be a milk container. Why? It told lies to itself and treated them as facts.

Primarily, Chip argues for evaluation to be embedded at every stage: model selection, development cycles, and even post-production. Establishing robust sets of test annotations should be an integral part of the process — not a nice-to-have, and not an afterthought.

This chapter reshaped my approach. For AI applications where trust is a key factor, evaluation needs to be continuous, targeted, and core to our procedures. Treating it as a QA checklist is too shallow: it’s the foundation on which real-world success is built.

Chapter 5: Prompt Engineering — The Art of Asking AI the Right Way

Prompt engineering was the very first model adaptation technique I ever discovered — and the one I most underestimated. In Chapter 5 of AI Engineering, it turns out prompt engineering is less about crafting witty questions and more about excelling at communicating with smart machines. Chip Huyen leads us on a technical and philosophical odyssey and turns “playing with prompts” into an engineering practice.

We begin with the fundamentals: a prompt is a direction to a model. That might be a straightforward question, such as, “What is the capital of France?” or a multi-action assignment like, “Break down the following sales report and summarise its insights in bullet form.” Prompts may involve:

  • Task specification (what is to be done),
  • Examples of desired input-output behaviour (few-shot),
  • Context or previous conversation (particularly for chat interfaces).

What is distinctive about prompt engineering is that it doesn’t modify the model’s weights — it is completely input-centric. This makes it the fastest and least expensive method to adapt a foundation model.

Don’t let its simplicity fool you. Prompting has its nuances. The location of your instructions — beginning or end — can significantly impact performance. Chip has empirical data: models such as GPT-4 respond better to instructions at the start but others like LLaMA-3 might like them at the end.

We also learn to test robustness. If replacing “five” with “5” breaks your model’s response, it’s not robust — red flag. More robust models are less fragile to such prompt perturbations, so fiddling with prompts is a good proxy for model quality.
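That perturbation check is easy to automate. Here is a hedged sketch — `ask_model` is a placeholder for whatever function wraps your model call, and the rewrites are just examples of trivially equivalent prompts:

```python
def perturb(prompt):
    """Trivial rewrites that should not change a robust model's answer."""
    variants = [prompt.replace("five", "5"), prompt.upper()]
    return [v for v in variants if v != prompt]

def robustness_report(ask_model, prompt):
    """Map each perturbed prompt to True if its answer matches the baseline.
    `ask_model` stands in for your model API wrapper (an assumption here)."""
    baseline = ask_model(prompt)
    return {v: ask_model(v) == baseline for v in perturb(prompt)}
```

Run it over a handful of prompts: lots of `False` entries means the model (or your prompt template) is fragile.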

Chip believes in approaching prompt engineering scientifically. Keep track of experiments, test variations, and systematically optimise. It’s quite like A/B testing in product development.

One of the most relevant sections here is prompt security. Prompts can mislead models as much as they direct them: malicious users can cause models to ignore their instructions through prompt injection attacks. This is most dangerous in multi-user environments such as finance or customer support. Some defensive methods are:

  • Input/output filtering
  • Escaping unsafe tokens
  • Utilizing system prompts to constrain model behavior
  • And more broadly, treating prompts as if they’re code: structured, vetted, and protected.

The chapter also discusses in-context learning:

  • Zero-shot prompting
  • Few-shot prompting
  • Chain-of-Thought (CoT) prompting, which asks the model to reason step by step
♦Image from AI Engineering by Chip Huyen
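The in-context learning variants above can be sketched as tiny prompt builders. This is illustrative only — the "Q:/A:" layout and the step-by-step trigger phrase are common conventions, not fixed APIs:

```python
def few_shot_prompt(examples, question):
    """Few-shot: prepend worked input/output pairs before the real question."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def cot_prompt(question):
    """Zero-shot chain-of-thought: append the classic step-by-step trigger."""
    return f"{question}\nLet's think step by step."
```

Zero-shot prompting is then just sending the question alone; few-shot adds examples; CoT adds an instruction to reason before answering.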

Chip cites tangible benchmarks: with CoT, Gemini Ultra’s MMLU score was boosted from 83.7% to 90.04%. That’s a testament to how influential structured prompting can be, even compared with fine-tuning.

Ultimately, this chapter transformed the way I perceive prompt engineering. And when executed optimally, it enables us to harness a model’s full power without incurring a single cent on training costs.

Chapter 6: RAG and Agents — Providing Models with Memory and Tools

I had not yet realised how much context informs the behaviour of AI systems before reading this chapter. Chapter 6 of Chip’s AI Engineering is a technical deep dive into two of the most influential patterns used to scale AI applications: Retrieval-Augmented Generation (RAG) and agentic systems.

Foundation models are strong but forgetful. They are unable to recall document-length text or maintain a track of changing conversation. To solve this problem, we have to equip them with tools to retrieve and engage with related information — welcome to RAG and agents.

RAG: Retrieval-Augmented Generation
♦Image from AI Engineering by Chip Huyen

RAG alters the paradigm by allowing models to retrieve useful external information prior to generating a response. Rather than providing the model with a behemoth prompt containing all possible data, you can:

  • Pull out only the relevant parts using techniques such as vector search.
  • Pass only those parts into the prompt.

The consequence? More relevance and less hallucination.

RAG is analogous to feature engineering for foundation models. While traditional ML had you hand-engineer features, RAG has you engineer the appropriate context. Chip notes that this pattern excels when your app relies on private knowledge bases or domain-specific material.

The mechanisms of RAG include:
  • An embedding model which transforms documents and queries into vectors.
  • A search system (e.g., Weaviate, FAISS) to retrieve the most relevant documents.
  • A composer that combines the retrieved context and the query into a request to the model.

One interesting real-life example I discovered: a user queries, “Can the Printer-A300 print 100 pages per second?” The system pulls in the manual section with specifications and includes it in the model’s prompt — providing grounded and accurate responses.
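That flow (embed, retrieve, compose) can be sketched end to end. Here I use a toy bag-of-words similarity in place of a learned embedding model and a vector database such as FAISS or Weaviate; that substitution is an assumption for illustration only:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def compose(query: str, docs: list[str]) -> str:
    """Build the final prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."
```

Swapping `embed` for a real embedding model and `retrieve` for a vector index gives you the production version of the same shape.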

Agents: Equipping Models with Tools and Autonomy

Agents take it to the next level. Rather than merely fetching static information, agents can use tools, plan, and execute actions. They’re how you transition from “static chatbot” to “AI assistant scheduling appointments, checking the weather, and following up.”

Chip describes agents as models augmented with APIs or plugins — such as web search engines, scheduling applications, or CRM systems. What makes them interesting is the dynamic interactivity. Agents may:

  • Reason about their actions
  • Make multi-step decisions
  • Use buffer memory
  • Call external functions

But it’s not completely rosy. Agents are fragile: they call the wrong tool, plan poorly, or hallucinate steps that don’t exist. Managing their memory is vital. Common strategies are:

  • FIFO memory: Store the most recent turns only
  • Summary Memory: Recap past conversations
  • Long-term vector memory: Retrieve by similarity
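The first of these strategies is simple to sketch. A FIFO buffer keeps only the last N turns and silently drops the oldest (the class name and sizes are illustrative, not from the book):

```python
from collections import deque

class FifoMemory:
    """FIFO conversation memory: retain only the most recent turns."""

    def __init__(self, max_turns: int = 3):
        # deque with maxlen evicts the oldest entry automatically
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def render(self) -> str:
        """Serialize the remembered turns for inclusion in the next prompt."""
        return "\n".join(f"{role}: {text}" for role, text in self.turns)
```

Summary memory and long-term vector memory follow the same interface; only what `render` returns changes.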

The architectural considerations are enormous. Your app architecture becomes a modular, dynamic system with layers of search, planning, and decision-making when using RAG and agents.

This chapter made me rethink the architecture of AI. Whether you are designing systems that require reliability, context comprehension, or autonomy, RAG and agents aren’t “nice to have” but are a necessity.

Chapter 7: Finetuning — Should You Train or Tune?

I used to believe finetuning was a matter of having a large GPU and lots of data. Chapter 7 of AI Engineering blew that out of the water. Chip Huyen takes us through why, when, and how to finetune foundation models — and how to do it without breaking the bank.

The chapter begins with a fundamental fact: you might never have to finetune at all. Prompt engineering and retrieval-augmented generation (RAG) already yield strong customization, making finetuning more a last resort than a first step. Chip provides a framework to help you decide: if your system requires behavioral change beyond what prompting and RAG can deliver, and if you have good data, finetuning is the optimal choice.

Full Finetuning is Obsolete:

Traditional finetuning updated all model weights, which was feasible when models were small. But as models grew, it became infeasible: finetuning all parameters of a multi-billion-parameter model takes more memory and compute than most practitioners have access to.

The industry moved to more parameter-efficient fine-tuning (PEFT) methods instead.

♦Image from AI Engineering by Chip Huyen

Enter PEFT and LoRA:

Chip discusses several PEFT approaches, beginning with LoRA (Low-Rank Adaptation).

LoRA is a star among finetuning methods because it:

  • Keeps the base model frozen.
  • Injects lightweight adapter modules.
  • Needs fewer resources.
  • And enables modular deployment (e.g., swapping adapters to accommodate different scenarios).
♦Image from AI Engineering by Chip Huyen

She even deconstructs the architecture of LoRA and how it differs from partial finetuning, in which only a subset of layers is updated. Surprisingly, LoRA consistently beats partial finetuning on both sample and memory efficiency, at the expense of slightly higher inference latency.
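The efficiency win is easy to quantify. For a weight matrix of shape (d_in, d_out), full finetuning trains d_in x d_out parameters, while LoRA trains two low-rank factors, A (d_in x r) and B (r x d_out). A quick sketch (the 4096-dimension layer and rank 8 are illustrative values, not figures from the book):

```python
def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Trainable parameters: full finetuning vs. one LoRA adapter pair."""
    full = d_in * d_out            # every weight in the matrix
    lora = rank * (d_in + d_out)   # A: d_in x rank, plus B: rank x d_out
    return full, lora

full, lora = lora_param_counts(4096, 4096, rank=8)
reduction = lora / full  # fraction of weights actually trained (well under 1%)
```

Because the base weights stay frozen, the tiny A and B matrices are all you store and ship per task.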

This modularity also enables a technique referred to as model merging — combining multiple specially finetuned adapters into a single model. This is particularly useful for deploying on edge devices, or for scenarios in which multiple features must be packaged into one model.

The Bottleneck Isn’t Finetuning:

The real bottleneck is data. If there’s a single thing to learn from this chapter, it’s this: finetuning is not difficult. Good data is.

Finetuning relies on instruction data to work properly. High-quality and neatly labeled data is costly and time-consuming to collect. This sets up a paradox: you can simply add a LoRA adapter to your model, but if your data aren’t clean, the output will be trash.

Trade-offs and Advice:

Throughout the chapter, Chip highlights trade-offs:

  • Parameter efficiency vs. latency
  • Data quality vs. scale
  • Customization vs. generalizability

She also discusses quantized training and distillation, and how both of these methods complement the larger toolkit of tuning.

At the end of the chapter, I realized that finetuning is not about brute force anymore. Finetuning is about precision, minimalism, and informed decisions — that’s a mindset any modern AI engineer should adopt.

Chapter 8: Dataset Engineering — The Unseen Backbone of AI Success

If I’ve taken anything away from working with AI, it’s this: however advanced your model is, it’s only as good as its data. Chapter 8 of AI Engineering by Chip Huyen delves deep into this fundamental reality and demonstrates why dataset engineering is the most underappreciated but essential skill in AI.

Why Data is Truly the Differentiator:

As models become commoditized, firms can no longer count on model innovation. Chip maintains that a dataset containing high-quality, domain-specific, or proprietary data becomes your differentiator. Since compute is readily available and open-source models abound, data is the moat now.

Still, dataset work is unglamorous. Chip is refreshingly blunt about it here: “Data will mostly just be toil, tears, and sweat.” She is correct. Good data work involves tedious and iterative work, but it’s also what distinguishes toy prototypes from solid AI products.

Key Pillars of Dataset Engineering:

The chapter splits dataset work into three key activities:

  • Curation — What data do I require? How much? Where do I get it? How do I maintain quality?
  • Synthesis — Leveraging AI itself to produce annotated examples, particularly when it is too expensive or time-consuming to get them from humans.
  • Processing — Cleaning, de-duplicating, and formatting data to make it usable.
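The processing step in particular lends itself to small, testable utilities. A sketch of exact de-duplication after case and whitespace normalization (a real pipeline would add near-duplicate detection, e.g. MinHash):

```python
import hashlib

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so trivial variants hash identically."""
    return " ".join(text.lower().split())

def dedupe(examples: list[str]) -> list[str]:
    """Drop exact duplicates (up to normalization), keeping first occurrences."""
    seen: set[str] = set()
    out: list[str] = []
    for ex in examples:
        digest = hashlib.sha256(normalize(ex).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            out.append(ex)
    return out
```

Hashing the normalized form rather than the raw text is what catches "Hello  world" and "hello world" as the same example.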

I was glad to see Chip’s emphasis on dataset life cycle. She discusses how pre-training and post-training call for varying data approaches:

  • Pretraining: emphasizes breadth (measured in tokens).
  • Posttraining: emphasizes depth and clarity (measured in examples).
♦Image from AI Engineering by Chip Huyen

And it’s nonlinear — you’ll constantly go back and forth between curation, synthesis, and cleanup. Chip recommends approaching dataset creation as software development: version control, documentation, reproducibility.

Human vs. Synthetic Data:

Human-labeled data, particularly instructions and conversation, is still gold. It’s costly, however: Chip estimates a good (prompt, response) pair can cost around $10, and instruction-following models need thousands of them. No wonder firms like OpenAI hire professional annotators with graduate degrees.

On the other hand, synthetic data (data created by AI itself) is taking hold. Faster, scalable, and inexpensive — but dangerous. Unless it’s carefully filtered, you’re left with self-reinforcing biases or poor quality signals. Nonetheless, lots of start-ups are using it to bootstrap new models successfully.

The Shifting Data Landscape:

Another wake-up call: web data is less open than it was before. Chip mentioned earlier in the book how sites such as Reddit and Stack Overflow have imposed restrictions on data access and how copyright wars are intensifying. This has caused firms to enter into licensing arrangements with publishers or dig up internal corpora — emails, contracts, help tickets — to produce private datasets.

She also refers to a chilling trend: the web is filling up with AI-generated content. Future models trained on this “echo chamber” may perform worse as a consequence. A human-originated, high-quality dataset may become a luxury most firms are no longer in a position to afford.

Chapter 9: Inference Optimization — Making AI Cheaper, Faster, Smarter

Reading Chapter 9, I discovered the pulse of production AI: inference optimisation. It’s not just what a model can do; it’s how fast and affordably it does it. No consumer will wait five seconds for a chatbot to respond. No firm wants to pay $10,000 per day to serve one. Chip Huyen puts this problem into stark relief and shows a toolkit of methods to make foundation models feasible at scale.

What is Inference?

Chip first defines the distinction between training and inference. Training teaches the model; inference applies it to make predictions in real time. Most AI engineers work with inference more than training, particularly if you are using pre-trained models.

The inference server executes the model, dispatches user requests, manages hardware allocation, and returns responses. Speed alone is not the issue; it’s a coordination problem mixing model design, systems engineering, and hardware planning.

Bottlenecks and Metrics:

Inference tends to slow down because of two fundamental bottlenecks:

  • Compute-bound operations: hampered by math-intensive work (such as matrix multiplication).
  • Memory-bandwidth-bound operations: hampered by data transfer between CPU and GPU.
♦Image from AI Engineering by Chip Huyen

These bottlenecks guide you in selecting the right hardware or model settings. Chip takes us through latency metrics such as:

  • Time to first token
  • Time per token
  • Total query latency

Understanding which of these matters most to your user flow is vital. Users may accept more verbose responses, say, if the first token appears immediately.

Techniques to Optimize:

Now to the point: how to make inference faster and more affordable.

  • Quantization — Lower model precision (e.g., float32 → int8). Saves space, compute and money.
  • Distillation — Train a smaller “student” model to approximate a large “teacher.” Faster but a bit less precise.
  • Caching — Store results to avoid multiple querying. Straightforward and efficient.
  • Batching — Run multiple requests together. Improves GPU utilization but adds waiting time.
  • Early stopping — Implement constraints on how many tokens to produce, or when to halt on certain criteria.

They all have trade-offs: you may gain speed at the expense of small performance losses. Chip forces us to strike a balance between latency, price, and quality, particularly on user-facing products.
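Caching is the cheapest of these wins to demonstrate. A sketch of an exact-match response cache around an arbitrary model call (the class and call counter are illustrative, not a named library API):

```python
class CachedModel:
    """Exact-match response cache wrapped around any model-call function."""

    def __init__(self, model_fn):
        self.model_fn = model_fn   # the expensive call we want to avoid repeating
        self.cache = {}            # prompt -> response
        self.calls = 0             # how many times the underlying model actually ran

    def generate(self, prompt: str) -> str:
        if prompt in self.cache:   # cache hit: skip the model entirely
            return self.cache[prompt]
        self.calls += 1
        response = self.model_fn(prompt)
        self.cache[prompt] = response
        return response
```

Production systems refine this with TTLs, size limits, and semantic (embedding-based) matching, but the trade-off is the same: stale or mismatched answers in exchange for skipped compute.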

Model Parallelism:

If a model is too large to fit on a single GPU, model parallelism divides it across devices:

  • Tensor parallelism: divides math operations
  • Pipeline parallelism: splits model stages
  • Other schemes: splitting the computation or input by function

This applies more to teams running their own models. For those using APIs (OpenAI, Anthropic), the lesson is to understand what’s under the hood, so that when you need to scale, you can make informed decisions.

Business Impact:

Chip ends with the harsh reality: optimisation is not optional. Inference costs increase linearly with usage. Every 10,000 users can mean a daily spend of thousands if you don’t optimise.

What hit me most was her framing: inference is not simply a back-end issue — it’s a product feature. Users experience it. Companies pay for it. And good engineers learn it.

Chapter 10: AI Engineering Architecture and User Feedback — From Prototype to Production

As I wrapped up AI Engineering, Chapter 10 pulled everything together. It’s not just about clever prompts or smart finetuning — it’s about how all these parts interact in a real system. This chapter walks us through the evolving architecture of AI applications and why user feedback is the backbone of iteration and trust.

Building Your AI Stack, One Layer At A Time

Chip starts with the simplest version of an AI app: a user sends a query, the model generates a response, and that’s it. But as any developer knows, this doesn’t scale. Real-world AI apps need guardrails, memory, context injection, and optimization. So the chapter introduces a modular architecture that evolves over time:

  • Basic Model API Call — No augmentation or caching.
  • Context Construction — Add external knowledge via RAG or tool use.
  • Guardrails — Protect against harmful inputs and outputs.
  • Routing and Gateways — Support multiple models and APIs.
  • Caching — Speed up frequent queries.
  • Write Actions and Agent Patterns — Let AI perform actions like booking or writing.

Each addition boosts capability — but also introduces complexity and failure points. Chip stresses the need for observability, with logging and metrics across all layers.

Guardrails: Safety Nets for AI

As models get smarter, the risks increase, especially with tools and write permissions. Guardrails are the protective layer:

  • Input Guardrails detect and block harmful prompts (e.g., prompt injection).
  • Output Guardrails check for toxic or unsafe model outputs.
  • PII Handling masks sensitive data before sending it to external APIs.
♦Image from AI Engineering by Chip Huyen

She also dives into real-world risks: what if a user pastes private info into a prompt? What if an AI agent triggers a bank transfer? These aren’t hypotheticals anymore.
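PII handling in particular is straightforward to sketch: mask obvious identifiers before the text ever leaves your system. The regexes below catch only simple emails and US-style phone numbers and are illustrative, not production-grade:

```python
import re

# Deliberately simple patterns; real PII detection uses broader rule sets
# and named-entity recognition on top of regexes like these.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace detected identifiers with placeholder tokens before an API call."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

Running this on every outbound prompt is a minimal input guardrail; the same idea applies on the output side before a response reaches the user.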

User Feedback: The Ultimate Model Optimizer

AI outputs don’t improve on their own. Feedback is how we learn what’s working and what’s not. But here’s the twist: natural language feedback is easy to give and hard to parse. Chip outlines methods to extract structured signals from conversations, like:

  • Explicit thumbs-up/down
  • Implicit signals (time spent, follow-up questions)
  • Flagging problematic behaviour

She urges us to design feedback systems upfront, not as an afterthought. It should be a core part of your data pipeline and evaluation loop.
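Designing feedback capture upfront can start as small as a structured log. A sketch that records explicit and implicit signals per response and computes a thumbs approval rate (the class and signal names are mine, not from the book):

```python
from collections import defaultdict

class FeedbackLog:
    """Structured per-response feedback store for the evaluation loop."""

    def __init__(self):
        # response_id -> list of (signal_name, value) events
        self.events = defaultdict(list)

    def record(self, response_id: str, signal: str, value) -> None:
        self.events[response_id].append((signal, value))

    def approval_rate(self):
        """Fraction of explicit thumbs signals that were 'up' (None if no votes)."""
        ups = downs = 0
        for evs in self.events.values():
            for signal, value in evs:
                if signal == "thumbs":
                    ups += value == "up"
                    downs += value == "down"
        total = ups + downs
        return ups / total if total else None
```

Because implicit signals (dwell time, follow-up questions) land in the same log, they can later be mined alongside the explicit votes.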

The Rise of Full-Stack AI Engineers

This chapter highlights a broader shift — AI engineering is merging with full-stack development. Thanks to APIs like OpenAI’s Node SDK and frameworks like LangChain.js, more frontend developers are entering AI. Those who can build fast, iterate fast, and collect feedback fast are the ones who’ll win.

What really stuck with me?

You don’t need a giant model or deep ML expertise. You need good systems thinking, strong UX, and a feedback loop. That’s the new stack.
Conclusion

I hope this summary gives you a good overview of what to expect from “AI Engineering.” The book taught me valuable skills in AI, especially in understanding the full engineering lifecycle, not just building models. I’m still reading and learning the concepts in this book, but I wanted to share these insights as they’ve already transformed my understanding of the field. Whether you’re a developer, product manager, or just an AI learner, I hope my summary gives you a short explanation for why this book is worth your time.

Building Real AI Systems was originally published in Code Like A Girl on Medium, where people are continuing the conversation by highlighting and responding to this story.


Children and Youth Planning Table of Waterloo Region

CYPT Communications Action Team Update

As you may already know, the Children and Youth Planning Table has Support Teams that work year-round to improve child and youth well-being by leveraging expertise in their specific focus area. The CYPT currently has two active Support Teams:

  1. Data, Research, and Evaluation Team (DRE)
  2. Communications Action Team (CAT)


CAT was originally created because the CYPT had no dedicated Communications staff. It was a group of communications professionals from CYPT member organizations who advised and did communications work for the CYPT. After the CYPT Communications Coordinator position was created, CAT evolved into a space focused on networking, sharing updates, and professional development.


In Summer 2024, a very similar group to CAT began forming in the community called REACH. They are a community of practice for communications professionals working in the non-profit space. Their meetings are structured the same as our CAT meetings have been – with community updates, networking, and professional development. 


With the new 2024-2027 CYPT Strategic Plan, and the formation of the REACH Communications Group in the community, the CYPT’s Communications Action Team is shifting its focus. CAT will become more action oriented – with the goal of producing work for the CYPT and contributing to the CYPT’s Knowledge Mobilization and Influence efforts. Please stay tuned for updates about what that will look like and how you can get involved.


At this time, Stacey McCormick, the previous Co-Chair of the CAT group, will be stepping out of the role. We would like to send her a huge thank you for all the work she’s done over the years with CAT! Stacey will remain on the CYPT Steering Committee as she is the Co-Chair of CYPT’s Early Years Community of Practice. 

The post CYPT Communications Action Team Update appeared first on Children and Youth Planning Table.


Elmira Advocate

PUBLIC RELATIONS PUSH IS IN HIGH GEAR AT LANXESS CANADA, ELMIRA


Well why wouldn't it be? They sure as hell don't want to spend the appropriate amount of money to either 1) clean up the Elmira Aquifers to drinking water standards 2) clean up the embedded Uniroyal toxins (DDT, dioxins etc.) in the Canagagigue Creek sediments and soils 3) clean up the sub-surface pig pen that is their site. After a tour of their east side property  Luis and Hadley bulls*itted Sebastian and others stating that nothing flows eastwards off their site. This is despite topographical contours showing off-site eastwards flow from their property onto the Stroh property which Lanxess and GHD have refused to discuss intelligently or honestly.

So the most recent PR crap from Lanxess is a light meal and a plant tour for selected individuals. Some of the individuals invited are woefully out of their depths for the very obvious reasons that they've never seriously applied themselves to studying the issues in depth. Many have known for decades that their bread is buttered by Woolwich Township blind loyalty hence what need is there for them to actually know and understand the facts? Others such as Pat Mclean no longer even live in Elmira or Woolwich Township but continue to attend more to socialize than anything else. Seriously though she never did have a serious grasp on the details as she left the reading of technical reports to others such as myself.

Others are attending and sucking up mostly to get appointed to TRAC or whatever the next iteration of a tame, deferential "citizens'" committee will turn out to be. All in all more of the same as talk is so much cheaper than cleanup and always has been. 

Thankfully I was not invited to their dog and pony show so I don't need to formally tell them to p** off. If and when they are dragged kicking and screaming to real discussions/negotiations then maybe, just maybe depending on all the circumstances and conditions I might have an interest. Until then just like Vladimir Putin they are merely liars and obfuscators wasting everybody's time and resources for their own purposes.


James Bow

Assemblies of the Eighties

♦This image is entitled MRSS Year 12 Steiner last day 2007 by Gavin Anderson. It is used in accordance with his Creative Commons license.

They still have school assemblies with visiting performers these days, don't they? I know they have author visits, and I know there are public safety messages, talent shows, school plays and general seasonal celebrations, but were they like some of the presentations I saw in my childhood?

I remember one presentation in particular where we were hauled down to the gymnasium at Lord Lansdowne Public School and treated to a story about gas -- natural gas, to be exact. It featured a mad scientist (white coat, funny hair, glasses, the lot) in his mad laboratory, working with natural gas. And by some science magic, he manages to resurrect a dinosaur from the fossil fuel, performed by another performer in a large felt-covered suit (which must have been extremely hot under the lights).

The presentation went over all of the safety issues around natural gas, including how to smell a leak and what to do then, all told in such a way as to make the kids respect, but not fear, natural gas. It ended with a song and dance number where the nerdy mad scientist danced comically badly.

Looking back, one thing that struck me about this presentation is that they went beyond safety to talk about the future of gas. The mad scientist pulled down a projection map of Canada which showed where our active gas reserves were, and he confidently told us that there was enough gas here (in and around the Rockies and in Ontario) to last us thirty years.

"Ahem," says the dinosaur. "I don't know about you, but I think these kids here plan to be around for longer than thirty years."

"Ah, well, you see," said the mad scientist, and he proceeds to highlight possible but untapped resources through the maritimes, and across the Arctic, boasting, "these should last us well into the next century!"

So...

Up to this point, this presentation was like the other public safety presentations where we were taught to respect but not fear a particular item, like police officers talking about drugs (and hauling out the incredibly creepy Blinky, the doe-eyed police car), and how prescription drugs were okay but you still had to be careful around them, or Hydro officers telling us to watch out for downed power lines after storms and not play with them like Indiana Jones and his whip. But none of the other presentations got into the subject of the future of Canada's resources.

Which makes me wonder who actually put on this presentation. A local gas company would be expected to tell kids about how to smell a gas leak and what to do if you do, but I doubt they'd have any interest in encouraging kids to support gas extraction in the Arctic. That smells like an industry public relations organization.

Well, as I'm still remembering this almost forty years down the line, all I can say is that their PR department got their money's worth.

(UPDATE): Doing a quick bit of research, I see that the gas company Enbridge is quite happy to speak to schools in this day and age but, again, more about being safe around gas rather than where to drill for it.