Sailing Weather Service Chief Meteorologist Chris Bedford was interviewed by the BMW ORACLE Racing media crew. He chats about the unique weather strategies needed for the 33rd America's Cup.
The New Year is often a sign of change, but for Key West a familiar weather pattern is emerging. A recent North Sails Blue Weather Center article spoke of this familiar pattern. Competitors are urged to read the article, found in the Weather Center at na.northsails.com, which outlines the typical Key West weather cycle in January. The cycle follows a simple and predictable pattern lasting between five and seven days, based on the position of high pressure centers and fronts. A general understanding of the cycle can go a long way toward predicting the conditions you will experience on the racecourse.
This season the cycle has occurred regularly since the middle of December. Those watching the weather may have noticed the cycle since it is easily observed using the surface weather maps in the North Sails Blue Weather Center. If you have been unable to take advantage of the Weather Center, now is an excellent time to start because racing is only a few weeks away.
Author: Matt Sanders, SWS Meteorologist
Very consistent sailing conditions are frequently experienced during Key West Race Week. Often, nailing subtle wind shifts is the key to success. The typical Key West weather in January follows a relatively simple and predictable cycle lasting between five and seven days. Predicting individual shifts is difficult, but a general understanding of the weather patterns can be a big help in defining what shifts you will experience.
To understand the cycle, let’s say that the first day of racing finds a recently passed cold front stationary or dying south of the racing area.
Author: SWS
SWS sponsors University of Michigan Solar Car Team
Sailing Weather Service, LLC recently sponsored the University of Michigan (U of M) Solar Car Team in the World Solar Car Challenge, held in the Australian Outback in October. The Challenge started in Australia's Northern Territory and ended in South Australia, requiring racing across Australia's vast, hot, and most desolate lands. The team finished the 3021-kilometer course third in a field of more than 30 cars. The team used input from Sailing Weather Service's proprietary high resolution weather modeling to assist in determining an optimum battery charging strategy.
Sailing Weather Service, LLC provided access to proprietary high resolution meteorological modeling and computer resources that the U of M student team used to run twice-daily simulations of the weather conditions across the entire Australian Continent. Predictions of cloud cover and incoming solar radiation – variables that are vital to predicting the charging efficiency of the Solar Car batteries – were made along the entire race route from the coastal city of Darwin in the tropical Northern Territory south to Adelaide on the nation's south coast. In addition, Sailing Weather Service provided consulting to the team on setting up and operating the models, a technology transfer component critical for this student-run competition.
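The basic idea of feeding irradiance forecasts into a charging estimate can be sketched in a few lines. Everything below is invented for illustration: the panel area, conversion efficiency, and forecast irradiance values are hypothetical numbers, not the team's actual figures or SWS model output.

```python
# Toy sketch: turning a forecast of incoming solar radiation into a
# battery-charging estimate. All constants and numbers are hypothetical.

PANEL_AREA_M2 = 6.0       # hypothetical solar array area
PANEL_EFFICIENCY = 0.22   # hypothetical panel conversion efficiency

# Hypothetical forecast: (hour of day, predicted irradiance in W/m^2)
forecast = [(9, 500), (10, 700), (11, 850), (12, 900), (13, 880)]

def energy_harvest_wh(forecast):
    """Integrate forecast irradiance (one-hour steps) into watt-hours
    of predicted energy captured by the array."""
    return sum(irr * PANEL_AREA_M2 * PANEL_EFFICIENCY for _, irr in forecast)

print(f"Predicted harvest: {energy_harvest_wh(forecast):.0f} Wh")
```

A real strategy model would of course fold in cloud-cover probability, panel temperature, and driving speed, but even this crude integral shows why accurate irradiance forecasts along the route matter to the charging plan.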
“As a University of Michigan alumnus, I am especially proud of our partnership with the students in this exciting green challenge,” said Sailing Weather Service Founder and Chief Meteorologist Chris Bedford. “Our experience in the application of state-of-the-art numerical weather modeling to strategic decision-making in competition makes this partnership a natural for us. We are particularly excited about the ability to showcase our technology in the area of alternative energy, where the accurate and timely prediction of weather conditions is vitally important.”
Sailing Weather Service’s Chief of Meteorological Modeling, Matthew Jones, worked closely with Chris McMeeking, the Solar Car Team’s student-meteorologist, transferring models to the University’s computing resources and providing advice on use and application of the information. Alex Dowling, U of M Solar Car Team Strategy Division Manager, remarked that “we are pleased to have connected with Sailing Weather Service to improve our forecasting capabilities in Australia.”
Sailing Weather Service, LLC, is a premier provider of customized weather forecasting and consulting to high-demand marine customers and users of detailed weather information worldwide. Headquartered near Boston, Mass., Sailing Weather Service maintains a globally re-locatable modeling system that can be applied to focus on locations with forecasts at resolutions of less than 1 km and out to 120 hours. This modeling capability supports task-specific decision-making technology and is enhanced by our experienced meteorologists’ interpretation and clear communication of complex weather information. Learn more about Sailing Weather Service at http://www.sailwx.com.
To learn more about the University of Michigan’s Solar Car team, visit http://solarcar.engin.umich.edu.
Author: Matthew Jones
3 Common Mistakes in Interpreting Wind Models: Sinking in a Sea of Data
Over the last decade, wind modeling data has become increasingly easy to get your hands on. With the advent of cheap, powerful computing technology, meteorological centers and private weather consulting firms alike are generating terabyte-wave after terabyte-wave of model GRIB (GRIdded Binary, a common meteorological data format) data and graphics. The end-user is rarely distinguished in the process, leaving sailors and race organizers to decide what data is best and most applicable to their needs. Often, they access the same generalized datasets that cyclists and city planners use to make their decisions.
Navigators and race strategists are deluged daily by model forecasts and predictions, requiring them to take valuable time away from charting their race course to swim through the ocean of model products swirling around them. Designers, too, are left to dredge the databases of climate centers to find enough model data and observations to give a proper representation of the local weather when and where their boat will be sailing. It's all quite a headache. And so, given the tremendous analytical exercise that dealing with all this model data involves, it is no wonder that users make several common mistakes when doing so. Hopefully, the following can highlight some bad habits that many of us develop when trying to go from forecast to finish line.
Mistake #1: “I always trust Model-A – it's super high resolution.”
Along with the push to run more models, we've also seen a push to run those models at higher and higher resolution – both in the horizontal (100 km -> 10 km -> 1 km) and the temporal (3 hours -> 1 hour -> 15 minutes). This is not a bad thing; in fact, reducing those dimensions helps tremendously in studying coastal wind and weather: it aids forecasters not only in developing and visualizing their own intuition of the local weather, but also in understanding the subtleties of patterns like terrain-following flow or sea-breeze fronts.
However, a model's greater precision does not necessarily lead to greater accuracy. For instance, it would be unwise to take a model forecast wind speed of 13.3333 knots and have any confidence that the model can predict to the nearest one-ten-thousandth of a knot. Likewise, we should not think that simply because a model calls for a 45-degree left shift across a 1km-by-1km area at 1415 in the afternoon, such a shift would occur at exactly that time over exactly that area. Models simply cannot be that precise and still have any accuracy over a significant period (for reasons described later). Once in a while, such precise predictions will be right on, but those occasions are rare to the point of being random occurrences.
We should not take that precision as a forecast answer but rather as a forecast guide. We should think in terms of features. That is, a high-res model may predict the timing and speed of a shift while missing its magnitude. Or perhaps the high-res model predicted the shift's speed and magnitude quite well, yet missed its geographic location. When dealing with high-resolution models, we should use their precision less as absolute truth and more as an applicable tool.
Mistake #2: “I never trust Model-B – it was terrible for Event-X.” Let's use the example of Peter Sailor for this one. Peter Sailor was competing in Key West one year. He got himself access to a particular model (say, the GFS) and analyzed its output every morning before heading out on the water. The GFS did poorly that week, and using it alone to make critical decisions, Pete sailed poorly as well. From that day on, whenever or wherever Pete sailed, he never again looked anywhere near the GFS, and at every chance he belittled the GFS and everyone (and their sister) who ever used it.
Does that sound familiar? The mistake Peter Sailor made (aside from bringing sisters into the conversation) was in transporting a model's skill from one venue to another and from one time period to another. There is a lot about weather that model developers do not and cannot know – especially when it comes to the surface layer of the atmosphere. In fact, to get around the unknowns, and to fill out their equations, model developers often customize different aspects of a model to handle specific weather phenomena like heat waves or hurricanes and specific time periods like summer or winter. There is no single model configuration that can correctly predict every region on the planet and every season of the year. Instead, local models are usually customized for a particular venue and time period of interest. That is why a model run over Key West at a given time could do poorly, but move that same model over New England, or even into next week, and it could do quite well. A model's skill can change quite significantly day to day and venue to venue. Taking a bad experience with a weather model and treating it as a global characteristic can keep us from a proper picture of the forecast. And by the same token, playing favorites can be dangerous.
Mistake #3: “Model-C ≠ Model-D. They both must be wrong!”
With all those weather models being produced and distributed, it should come as no surprise that occasionally (in fact, regularly) they will disagree on a forecast. When this happens, most of us weather model consumers do one of two things: either we ignore the weather model that gives us an inconvenient forecast and accept the more favorable one (“Tomorrow Model-C says it will blow 5 to 10 knots with sunny skies, and I want to go sailing tomorrow, so I'll go with Model-C.”), or, perhaps more commonly, we simply ignore both forecasts and seek a third, assuming the first two must be wrong. In a clean and orderly world, this may be the right tack, since in a clean and orderly world all weather models would agree. There is, however, a different phenomenon that rules the day besides order – chaos.
Chaos theory helps us realize that in the supremely complex system that is our atmosphere, we can never know exactly what the entire atmosphere of our planet is doing at any given moment – including right now. In fact, given our technological limitations and the simple logistics of operating a global network of weather observing stations, we can hardly know anything about what even a part of the atmosphere (like the lowest 10 m) is doing over a part of our planet (like over the Mediterranean Sea). Despite the enormous advances the meteorological community has made in using satellites to measure the weather, meteorologists still rely heavily on weather observing stations on the ground, and require weather balloons to be launched throughout the day.
Needless to say, there are significant unknowns in the system due to these limitations and the sheer complexity of the observing network. This is important to note, since the weather models we consult every day for our forecast use these observations as input to their equations that predict the flow of the atmosphere. That means before a weather model even starts running, there is error inherent in the system coming from an incomplete picture of what the weather is right now. Chaos theory again reminds us that even if these errors were small to begin with, they can grow very quickly, leading to large errors after just a short time of running the model. As if that were not enough, as we learned earlier, the models themselves are an especially imperfect representation of the physics of the lowest part of the atmosphere. So not only are there errors in the initial “guess” of what is happening right now by a model, but the equations it uses to guess what will happen 3 or 36 hours from now are equally replete with errors.
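The claim that small initial errors grow very quickly can be demonstrated with a toy chaotic system. The logistic map below is a classic textbook stand-in for sensitive dependence on initial conditions – it is not a weather model, and the parameter value is chosen only to put the map in its chaotic regime.

```python
# Toy illustration of chaotic error growth using the logistic map.
# Two runs start from "observations" that differ by one part in a
# million, analogous to a tiny error in a model's initial conditions.

def logistic_step(x, r=3.9):
    """One iteration of the logistic map (chaotic for r = 3.9)."""
    return r * x * (1.0 - x)

def run(x0, steps):
    """Iterate the map, returning the full trajectory."""
    x = x0
    history = [x]
    for _ in range(steps):
        x = logistic_step(x)
        history.append(x)
    return history

a = run(0.400000, 40)
b = run(0.400001, 40)  # initial "observation error" of 1e-6

for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.6f}")
```

After a few dozen iterations the two trajectories bear no resemblance to each other, even though they began essentially identical – the same qualitative behavior that limits how far ahead any weather model can usefully predict.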
In essence, what this means is that when two weather models disagree, they are both neither right nor wrong. In fact, given the uncertainty in the system from observations and model physics, the most we can say is that the two weather model forecasts are simply possible, with equal levels of probability. This concept – probabilistic thinking – is one that plagues many a weather model consumer. When two or more weather models of similar type and skill disagree, we should not simply choose one out of a hat and ignore the rest, but rather take all the solutions as equally probable forecasts. If we do choose one as our forecast guide, we should use the others to form a forecast “envelope” of solutions. Such an approach would not only lead to a more objective forecast process, but also alert us to several different weather scenarios, thus reducing or eliminating the element of surprise in the weather on the water.
The sea level of the ocean of weather models and model data is rising and will continue to rise in the years ahead. However, weather models will never be perfect, and if the all-too-common mistakes above are perpetuated, weather model consumers will continue to be weighed down by bad habits. Hopefully, with improvements to the observing network and enhancements of our knowledge of the atmosphere, those models will become increasingly skillful and easier to work with, helping us keep our heads above water. Weather consulting firms and knowledgeable meteorological consultants will also help us stay buoyant and on top of things.
Author: Matthew Jones
Will global warming change the way we sail?
Climate change is on the tip of everyone’s tongue these days. From postmen to politicians and from your local deli to your local disc jockey, everyone and their brother are talking about climate change. Myriad articles have been written about the negative effect of global warming on ice caps, the positive effect of ice caps on global cooling, and the positively negative effect of said myriad articles on reforestation. Seemingly absent, however, from the ever-growing stacks of language and diagrams describing climate change and its causes is a discussion of the effect of climate change on sailing.