National USGS Marine Geohazards Workshop, Menlo Park, March 1, 2011. Notes by Amy Draut.



Tsunami breakout group notes, 3/2/11 afternoon:
Gaps in knowledge?
Generation, deposits, and probabilities.

NOAA deals with propagation, warning.

USGS role is in terms of inundation, coastal effects.

Source catalog, with indication of likely importance for tsunami generation.

PTHA – more uncertainties and challenges in doing PTHA for landslides. This is a niche for USGS, not NOAA, because prehistoric event analysis is necessary. Need the paleotsunami record; otherwise you get a gross underestimation of hazard. Probability assessment without field analysis of paleotsunamis is insufficient; the 1700 Cascadia event, e.g., would otherwise have been unknown. Modeling must be informed by field studies of the geologic record. Written records don't extend back far enough in time; those we do have are often anecdotal or altered (Soviet Union).
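A minimal numeric sketch of that underestimation (hypothetical numbers, assuming Poisson recurrence): a 200-year written record that happens to contain no great tsunami implies zero estimated hazard, while a 1,000-year paleotsunami record containing two events implies a roughly 10% chance of at least one event in any 50-year window.

```python
import math

def prob_exceedance(n_events, record_years, horizon_years):
    """P(at least one event in horizon_years), with the annual rate
    estimated naively as n_events / record_years (Poisson model)."""
    rate = n_events / record_years
    return 1.0 - math.exp(-rate * horizon_years)

# Written record only: 200 years, no great tsunami observed.
print(prob_exceedance(0, 200, 50))    # 0.0 -- the hazard is invisible
# With paleotsunami evidence: e.g., 2 events found in a 1,000-year record.
print(prob_exceedance(2, 1000, 50))   # ~0.095
```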

We want to know how representative the earthquakes we know about are. How complete is our known earthquake catalog? Paleoseismic work and historical records (of varying quality) inform that. Each region of interest needs a comprehensive catalog of tsunami history; often, quantitative information is incomplete. The Aleutians (especially the Fox Islands) are very incomplete and need a much larger catalog. In the Kuriles, records have been pushed back ~5,000 years. It helps a lot to have mapped, dated tephras as markers. Revisit earlier studies as new technology and information become available (e.g., Bradley Lake, coastal Oregon).

How good is tephrochronology in different places? Not well developed yet for the Aleutians, especially as you go west; there is some in the eastern Aleutians. Logistical challenges of getting there to find out.

Micropaleontology? Useful for some general characteristics of tsunami deposits but not for dating.

Effects of tsunamis moving onto land: what are the effects of roughness elements on coastal tsunami behavior, in shallow water and onshore? Estuary and bay morphology, coral reefs, forests. Reefs and atolls can mitigate some damage to the shoreline; forests may attenuate waves coming onshore; the shape of bays/harbors can increase or decrease onshore damage (Samoa 2009). Complex nonlinear problems. What factors control inundation and tsunami behavior as it comes onshore? Needed for developing evacuation plans.
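One hedged illustration of why onshore roughness matters (an idealized steady-flow sketch using Manning's equation, not a tsunami model; the roughness coefficients below are typical textbook ranges, not measurements): for the same flow depth and slope, velocity scales as 1/n, so dense vegetation substantially slows overland flow.

```python
def manning_velocity(depth_m, slope, n):
    """Steady uniform overland-flow velocity (m/s) from Manning's equation,
    using flow depth as the hydraulic radius (wide, shallow flow)."""
    return (1.0 / n) * depth_m ** (2.0 / 3.0) * slope ** 0.5

# Same 2-m flow depth and 1% slope, different land cover (typical n ranges):
for label, n in [("bare sand", 0.02), ("grass/brush", 0.05), ("dense forest", 0.12)]:
    print(f"{label:>12}: {manning_velocity(2.0, 0.01, n):.2f} m/s")
```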

Understanding erosional and depositional characteristics of tsunamis.
Who are the users, and what do they need?

Communities, emergency planners, insurance companies. Coast Guard, Navy, harbors, fisheries. They use probability assessment (final product). To get to reliable PTHA as end product, need modeling, paleo record.

Within modeling, need source characterization, bathymetry, … many other things that feed into PTHA.

The National Research Council report on the tsunami program (Stephanie forwarded the report to me on 3/2/11) highlights national characterization of tsunamigenic sources as a focal point for USGS research.

Need to demonstrate recurrence through time when informing local communities/public (“time evolution” – Roland von Huene).
PTHA needs:

Maps
  Source maps
  Inundation maps (need to know flow depths, current velocities, factors that control inundation and behavior as tsunami comes onshore; roughness, morphology; subsidence; erosional/depositional aspects)

Probabilities
  Recurrence (need paleo record, interpretation of deposits, and to know quality of written historical record; tide gage records)
  Magnitude (segmentation)
  Maximum credible event

Priorities for data acquisition?

Better paleo records, with USGS focus on Aleutians, Alaska, coastal California.

Can we data-mine tide gage records with good results?
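One plausible way to start such data mining (a sketch, assuming an evenly sampled gage record; the 4-minute-to-2-hour band is a common tsunami-band choice, not a USGS standard): band-pass filter the series to remove tides, then look for excursions above background.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def tsunami_band(series, dt_s, t_short_s=240.0, t_long_s=7200.0):
    """Band-pass a sea-level series (sampled every dt_s seconds) to the
    ~4 min - 2 h band, removing tides and high-frequency noise."""
    nyq_hz = 0.5 / dt_s
    b, a = butter(4, [1.0 / t_long_s / nyq_hz, 1.0 / t_short_s / nyq_hz],
                  btype="band")
    return filtfilt(b, a, series)

# Synthetic demo: semidiurnal tide plus a small 20-min oscillation on day 2.
t = np.arange(0, 3 * 86400, 60.0)                       # 3 days, 1-min sampling
sea = (1.0 * np.sin(2 * np.pi * t / 44714)              # M2-like tide
       + 0.05 * np.sin(2 * np.pi * t / 1200) * (t > 86400))
residual = tsunami_band(sea, dt_s=60.0)
print("max residual (m):", residual.max())              # ~0.05, the signal
```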
USGS role/priorities: better models for landslide-generated tsunamis; state-of-the-art best estimates for tsunami-generation sources.

Sam: PTHA should be an essential component of coastal and marine spatial planning (MSP); see the Obama White House Council on Environmental Quality's National Ocean Policy (on the web at http://www.whitehouse.gov/administration/eop/oceans/policy).


Report from Earthquake breakout group: (Peter Haeussler presented their writeup)

Jody Bourgeois: presenting the tsunami breakout group’s summary in different format (being reworked by Jody, Stephanie, and me on Wednesday evening).

  1. Generation – understand/characterize tsunami sources; identify and map active faults and potential landslides. Better understanding of seismogenesis (splay faults, etc.) and landslide genesis.

  2. Propagation – realm of NOAA

  3. Inundation/behavior – impact of roughness on tsunamis, impact of complex bathy/topo on tsunamis (need better bathymetry), reconstruction of tsunami behavior. Inundation (distance inland), water depth, current velocity in harbors etc., erosion and deposition. Much based on sedimentological and geomorphic records.

  4. Recurrence – Catalogs: limited in most areas of concern in the US and territories; need paleo-records. Historically more complete in some far-field areas (Japan). Paleo records in some far-field areas are developed (Japan, Kamchatka). Paleo-tsunami records: Cascadia moderately well studied, but not so much about inundation, etc. Alaska-Aleutians high priority; they affect many sites both locally and distally. California high priority. Hawaii? Antilles?

Didn’t cover co-seismic deformation.
Jody making spreadsheet on Wed. evening about partners, needs, and priorities for each of the following: generation, inundation/behavior, and recurrence.
Offshore record of tsunamis, in turbidites? Jody not convinced you can get useful record that way. Chris G mentions someone’s study of Santorini, Mediterranean, but Jody doesn’t think it’s reliable. Paleoseismology studies could benefit from offshore record (turbidites generated by earthquakes), but not tsunamis, although it would be interesting to try to correlate the inferred earthquakes with tsunamis inferred from the onland sedimentary record.

Using tsunami deposits to reconstruct size and characteristics. Are there deposits offshore from tsunami return flows? One was documented after Sumatra, but thought it wouldn’t stay recognizable for very long. Guy Gelfenbaum: There was a big bar offshore Sumatra after 2004 that was deeper than you’d expect from most storms (12 m); a model also showed a bar forming in the same place where they found one. It disappeared in less than two years.

Classic problems of trying to distinguish tsunami deposits from other causes (storms, e.g.).
Jason Chaytor: landslide breakout group report.

Retrogressive failures, where slides fail headward of where they failed previously.

Roy Hyndman: Could we engineer slides deliberately to instrument and study them? Bill Normark did this in Lake Superior; no plans to try this further now.

Thursday, March 3

Morning session

Steve Kirby: Segment or area prioritization discussion.

Justification priorities (scientific, and impacts)

USGS programs and science centers involved

Partnerships (federal, states, universities, NSF GeoPRISMS, NSF USArray, NSF EarthScope, FWS, NOAA, AVO, UAF).
John Haines: caution not to prioritize margin segments by where we want to work; where we actually end up doing work is driven by things outside our control. Breakout groups should discuss how we set priorities and go from there in terms of scientific justification and impacts. Pick the “no brainer” areas.

Roland von Huene: Until we understand 1946 earthquake, we really don’t understand much of what happens along the Aleutian arc.

Steve Kirby: along to Fox Islands too – source of 1957 quake. Aimed more directly at Hawaii than 1946.

Jason Chaytor: the group deliberately didn't identify specific sites to study landslides because they don't want to limit choices. They want to understand how landslides happen; the goal is to find a natural laboratory, but they need to look at those features more before they can decide on that. Limiting themselves geographically right now might not be a good idea.

Steve Kirby suggests at least their landslide group should provide some criteria for site selection.

Carl Mortensen: among partnerships, we should refer to FEMA in particular. Written report should address how these scientific results will focus these kinds of partnerships.

Dave Scholl: A national marine geohazards program needs to cut to the chase. Where have big hazards happened, and why? 1946 did happen, was somewhat anomalous, and we need to understand why. Was there a landslide associated with that? [asked of Emile Okal]

Emile answered later in the breakout group that there is only circumstantial evidence that 1946 may have included a landslide. They can't reconcile the tsunami runup heights with what they know about the earthquake, or model them using any kind of earthquake source (which may then require a landslide explanation). Fishermen interviewed reported that at the shelf edge the bottom "disappeared." Emile, Costas Synolakis, and George Plafker (in a paper, he thinks in BSSA) tried comparing before/after surveys, but they weren't done in exactly the same place; they didn't feel they could confidently interpret the before and after surveys in the paper, just the anecdotal story. For the far field, a very large earthquake works and no landslide is needed.

Moving discussion away from these details, save them for breakout group.

Carolyn Ruppel: it's not clear to me that the science has been decided. Why discuss areas now in so much detail?

Jody Bourgeois: broader impacts have to be important. They're 50% of NSF's criteria, and NSF's mission is not, technically, to help people. We have to develop scientific questions, but focus on those that affect the communities that use our products. We must do societally relevant science, and our report has to reflect that.

Emile Okal: sounds like we really don’t need to break out to have this discussion.



Tsunami breakout group again:

Roy Hyndman presents approach to probabilistic tsunami hazard analysis, through either empirical approach or deterministic approach.

Empirical: event catalog per site, robust statistics, and all sources. Assume a maximum event. Analyze the magnitude-frequency relation; get return periods of X meters+.

Deterministic (Eric Geist disputes that terminology): identify sources and source parameters. For each possible event, model wave heights at coastal sites and estimate probability. Integrate results; get return periods of X meters+.
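A minimal sketch of that integration step (hypothetical sources and rates, invented for illustration; real PTHA also integrates over source-parameter, slip, and tidal uncertainty): sum the annual rates of all scenarios whose modeled wave height at the site exceeds a threshold, then convert the aggregate rate to a return period.

```python
import math

# Hypothetical scenario list: (annual rate, modeled wave height at the site
# in meters). In practice each source zone contributes many scenarios with
# rates drawn from magnitude-frequency relations.
scenarios = [
    (1 / 500.0, 6.0),    # e.g., full-margin megathrust rupture
    (1 / 150.0, 2.5),    # smaller segment rupture
    (1 / 50.0, 0.8),     # distant far-field source
    (1 / 2000.0, 4.0),   # submarine landslide
]

def return_period(threshold_m):
    """Return period (yr) for wave height >= threshold at the site,
    assuming independent Poissonian sources (rates simply add)."""
    total_rate = sum(rate for rate, h in scenarios if h >= threshold_m)
    return math.inf if total_rate == 0 else 1.0 / total_rate

for x in (1.0, 3.0, 5.0):
    rp = return_period(x)
    p50 = 1.0 - math.exp(-50.0 / rp)   # chance of exceedance in 50 years
    print(f">= {x} m: return period {rp:.0f} yr, P(50 yr) = {p50:.2f}")
```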

Example: Australia (Geoscience Australia) made PTHA maps as a national product, using the Sunda Arc subduction zone as the source (Burbidge et al., 2008). How was it received?

The Atlas of Canada, by Natural Resources Canada, made a tsunami hazard map for the Pacific coast assuming Cascadia-area near-field sources – on the Cascadia megathrust or Queen Charlotte Fault, crustal submarine earthquakes, delta-front landslides, fjord landslides, offshore landslides. Checked for historical tsunamis, modeling, and frequency – many blanks left in the frequency estimates. Far-field source catalog – many potential sources, and included an empirical approach using tide gage data (Tofino tide gage on Vancouver Island). They also estimated landslide frequency for the eastern Canada coast, including the Grand Banks. And the Arctic/Beaufort: there are landslides, potentially tsunamigenic, but almost no data. Canada is at risk from tsunamis in three oceans, and very little of the tsunami hazard has been quantified.

Eric Geist: The Australia national map was designed using policy considerations rather than scientific ones. The concept of a national probability map is meaningless unless geared toward specific users: building codes, FEMA, etc.

Jody: one can still think of maps as an ultimate goal, and what does one need to get there? At least go get the kinds of data and modeling that you could use to produce probabilistic hazard maps eventually.

Eric: may be better to focus on probabilistic maps rather than a demonstration scenario or education.

Eric's Seaside case study (FEMA-funded pilot study) used sources all the way from Japan up to Kamchatka and Alaska, and then Chile, with recurrence intervals from the earthquakes. They were mainly interested in the 100-yr flood for flood-insurance rate maps, and there really wasn't much danger of that from tsunamis because big coastal dunes would have blocked it. They had very detailed bathymetry. Bruce Jaffe took a lot of cores up there and looked at the deposits around Seaside. NOAA-PMEL did the computation for the propagation model. Costas Synolakis provided engineering information, calculating impact on structures, etc.

The National Research Council report recommended that USGS have responsibility for producing repeated (periodic) national tsunami source assessments. USGS has submitted a response but has not received feedback yet. It would be extremely costly.
Next 5-10 years, what should focus be?

In generation – see Jody’s spreadsheet. Mapping distribution and chronology of paleotsunami deposits.

Low-frequency events of course eventually happen.

The Navy does care about tsunami risk. They were, e.g., concerned about ships in the Persian Gulf when the 2004 Indian Ocean tsunami happened, and they also get called on to provide humanitarian response (Guy: they should also be concerned about, e.g., submarines in Puget Sound). They also contacted Jody asking about large earthquakes in Kamchatka. Tsunami risk also affects global (socioeconomic, then political) stability.


Our USGS priority should be to do the more general science over a wider geographic region (more thorough paleotsunami analysis, cataloging, etc.) rather than focusing on more extremely detailed case studies like Seaside again (which worked well as proof of concept) or scenarios like the multi-hazard demonstration project. The scenarios are driven by the science.

Guy's priority: looking in detail at what affects inundation once you get into the coastal zone.

Eric's priority: there's more work to be done on generation, but mostly in the seismology realm. Eric is more interested in statistically testing the assumptions that would go into those PTHA maps – testing against observations.
Fundamental science questions: why tsunamigenic earthquakes, and what controls M8+ earthquakes? Dynamics of landslides that produce tsunamis? Understanding links between tsunami processes and deposits. Estimating tsunami recurrence and magnitude from paleotsunami studies to support scenario development and statistical analyses.
During Jody’s presentation and discussion with whole group:

Chris Goldfinger asked about source definition: what are specifics of earthquake or landslide source? Getting caught in the gap now between the earthquake group and tsunami group discussions.

Emile Okal: two Indonesia tsunamis get “failing grades” from him. Not only scientists but also local communities don’t understand some fundamental things about the earthquake sources and how tsunamis then behave.

Guy Gelfenbaum: defending studies of tsunami behavior when it comes onshore. Not all of a tsunami's effects are dictated by the source characteristics; it is very important what roughness, etc., is going on where the tsunami comes onshore.


Jason Chaytor’s landslide breakout group list.

Priorities: continue/strengthen mapping, sampling, and dating nationwide (includes reevaluating legacy data). Identify/establish a "natural lab" to enable a process-based approach – locations that have a record of past failures, pose a continuing hazard, have ongoing sedimentation/slope modification, are logistically accessible, and are of manageable scale; e.g., fjords like Seward or Valdez, Puget Sound, deltas like the Mississippi, canyons (Monterey), tectonically active regions. Laboratory and theoretical studies on initiation and evolution/mobility of slides. And synthesis of existing data on fluid migration and underground blowouts. Partners? Everybody, including connections with people doing subaerial studies, MBARI, international collaboration.



Critical technologies breakout groups (#2, 3, and 4 merged)

Sediment sampling and analysis, tsunami modeling for coastal sites, and slope stability.
What are critical lab and field capabilities that we need?

We are thin on basic expertise and personnel

People are a core need, and we are stretched thin on that. What core expertise do we need to maintain to adequately address marine hazards?
What do we do well now?
Sedimentology, stratigraphy: grain size (USGS has laser particle-size analyzers, a Sedigraph, and settling tubes in the sed lab; much of the staffing is contract-based), photography, microfossils (several people: diatoms in USGS Seattle and Menlo Park; forams in Menlo Park; partners with paleoceanographers at, e.g., WHOI) – micro- and macrofossil analysis depends on individual people rather than major lab capabilities. We don't have a CamSizer (photographic method) for coarser samples. Digital-photograph grain-size analysis for medium sand up to cobbles (CobbleCam, "beachball," etc., for field sampling).

USGS also has expertise in comparing storm vs. tsunami deposits (both descriptive sampling and interpretation, and modeling).

USGS has wireline coring capability from drill rig truck (onshore).

We don't have Geoslicer capability (like a thin box core tens of meters long; similar to onshore vibracoring).

Paleomagnetics: USGS does have capabilities; could be useful to paleoseismology from turbidites if there’s a long enough record.

USGS has tephra labs (Menlo Park, Alaska) and thermoluminescence (Denver), and electron microprobes for volcanic glasses (Menlo Park, and UAF). For tephra analysis (grain size, mineralogy, microprobe; some whole-rock ICP-MS and XRF are very valuable, especially with active Alaska volcanoes), the techniques are well established and USGS has the right facilities, but lacks sufficient personnel time; understaffed.


Geotechnical, geomechanical: quantitative analysis of deformation and failure processes (applied to slope failure). Link to predicting ground motions. This bridges the gap between failure and the turbidite record – linking shallower to deepwater processes (source to sink).

Current work is mainly on fault gouges (poorly consolidated granular materials), but could be extended to other types of sediment.

Borehole geophysics: USGS has some expertise in pore-pressure measurements in hard rock but would need to partner with others to measure/monitor pore pressure in soft rock as part of natural-laboratory studies.

We also have expertise in acquisition and interpretation of geophysical borehole logs, heat flow (to diagnose fluid migration), and in-situ stress measurements, but would need to partner with others for similar efforts in soft sediments (either open or, probably, cased holes).

USGS Menlo Park does rock mechanics, rheology of poorly consolidated sediment especially clay-rich materials, re: landslides. Mineralogy, mineral transformation effects on strain, permeability, resistivity related to mechanical behavior.

Coring technology: our traditional coring capabilities are somewhat dated, especially here in the west (a personnel issue). Deep sampling in the marine environment is important (for fjords, etc.) and is complicated; it has been done in our team before but not as much recently. USGS has one multi-sensor core logger (MST), but we need partners for many analyses on cores (X-ray (a $250k system), XRF); we now look outside USGS for those needs. Maintaining those and core repositories can be costly; USGS Woods Hole cost-shares with WHOI. The western region has two refrigerated core-storage spaces. Our program has not focused on long, deep core collection for years, but that could change depending on future goals. Nationally, core storage facilities have decreased/declined; consolidation and focus on partnerships with academic institutions is key. The USGS Coastal and Marine program has the personnel and capabilities to collect cores, but less so for processing. USGS coring equipment is functional now but limited in high-tech coring platforms, e.g., vibracoring, long piston coring, ROV-based coring, which MBARI does. Semi-autonomous seafloor drilling can be rented – a state-of-the-art platform rig works down to 6000 m (applied in Japan, e.g.; estimated at tens of thousands of dollars per day).


What water depths and core lengths can current USGS technology collect? From surface ships, we can get 10-m piston cores or gravity cores in full ocean depths (4000-5000 m), or vibracoring in shelf water depths (3-4 m length). What do we need to be able to get (depths and core lengths) to adequately study marine deposits of interest to hazard studies? It makes sense to rent equipment including coring platforms if USGS does not use it often, and there are platforms that can be rented or partner-shared.

For some studies, it is very helpful to have ROV use for better positional accuracy in determining coring locations, in addition to surface-ship sampling/piston coring after collecting seismic data. Partnerships are key (MBARI, e.g.), with mutual objectives, rather than having USGS maintain our own ROV capability. We do a lot of work using ships of opportunity now, so we may be limited by what capabilities exist on those ships.


Equipment for core analysis: we are now benefiting from academic partners (MST = density, high-resolution photography, P-wave velocity, magnetic susceptibility, electrical resistivity). USGS can do physical properties on cores (MST in Menlo Park). We can't X-ray or do other analyses such as XRF chemical analysis (if we want those done, we go to WHOI, the GSC in Halifax, or Scripps). State of the art now is CT scanning of cores to get sedimentary structure, e.g., to know the extent of turbidites and to correlate with physical properties (we don't do that; OSU's veterinary school can for low cost, and WHOI can do some although the cost is often prohibitive).
Landslide deposit analysis and dating at good resolution from top of deposit, often multiple layers. Multibeam data very useful in combination with seismic reflection. ROVs greatly facilitate specific sampling on landslide features.
Dating sediment samples – 14C and O-isotope dating we tend to do outside USGS (e.g., WHOI has AMS facilities).

USGS does have gamma labs for 210Pb, 137Cs, 7Be – short-term studies. USGS has K-Ar, Ar-Ar facilities in Denver through Geology & Geophysics.
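As a sketch of the short-term dating those gamma labs support (the constant-initial-concentration 210Pb model, one of several in use; the activities below are invented): excess 210Pb decays with a ~22.3-year half-life, so age follows from the activity ratio between the surface and a given depth.

```python
import math

PB210_HALF_LIFE_YR = 22.3
LAMBDA = math.log(2) / PB210_HALF_LIFE_YR   # decay constant, 1/yr

def cic_age(activity_surface, activity_at_depth):
    """Age (yr) at a depth under the constant-initial-concentration model:
    A(z) = A(0) * exp(-lambda * t)  =>  t = ln(A0 / Az) / lambda.
    Activities are excess (unsupported) 210Pb, e.g., in dpm/g."""
    return math.log(activity_surface / activity_at_depth) / LAMBDA

# Invented excess-210Pb profile: 10 dpm/g at the surface, 2.5 dpm/g at depth.
print(f"{cic_age(10.0, 2.5):.0f} yr")   # ln(4)/lambda ~ 45 yr (two half-lives)
```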


Priorities that USGS does not have expertise in: NEHRP. We partner with NSF-funded people to do a lot; it's easier than it used to be to have USGS involved with NSF-funded programs.
Links to partners who have technologies we aren’t in a position to support well or don’t need to.
Eric’s input to tsunami modeling tools/capabilities:

Much of what tsunami modelers do involves sediment transport and sampling.

We run models that were developed by others (not USGS). NOAA PMEL does earthquake modeling; states make worst-case scenario maps. Jody: for the near field, the source is much more important than it is in far-field studies. USGS strengths should be in observation of the source and in sediment transport, rather than in developing the models themselves; we contract out model development. To use those models, though, you have to have very good nearshore bathymetry, and NOAA is putting a lot of resources into getting good nearshore bathymetry for tsunami modeling. GEBCO data has a lot of artifacts in it – not great. The models are thought to perform fairly well (without sediment transport factored in – the tsunami gets modeled only as a fluid, without sediment), but they can only perform as well as the nearshore bathymetry that gets fed into them.
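A back-of-envelope sketch of why bathymetry quality matters (idealized long-wave theory, not one of the operational models; the shelf profile is invented): tsunami speed in shallow-water theory is c = sqrt(g*h), so travel time across a shelf is the integral of dx/sqrt(g*h(x)), and depth errors map directly into arrival-time and shoaling errors.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def travel_time_s(x_m, depth_m):
    """Long-wave travel time across a depth profile: integrate dx / sqrt(g*h)."""
    return np.trapz(1.0 / np.sqrt(G * depth_m), x_m)

# Invented shelf profile: 100 km, shoaling linearly from 4000 m to 10 m depth.
x = np.linspace(0.0, 100e3, 1001)
h_true = np.linspace(4000.0, 10.0, 1001)
h_biased = h_true * 1.2   # e.g., a 20% depth bias from poor nearshore data

print(f"true profile:   {travel_time_s(x, h_true) / 60.0:.1f} min")
print(f"biased profile: {travel_time_s(x, h_biased) / 60.0:.1f} min")
# A 20% depth overestimate speeds the wave by 1/sqrt(1.2), ~9%:
# over a minute of arrival-time error on this short profile alone.
```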

Flume studies – could collaborate if needed with others who have them: CVO, SAFL. USGS has lots of expertise in fault dynamics and earthquake sources (through the Earthquake Hazards team), almost none of which is applied to marine problems. Oregon State Univ. has a tsunami simulation tank; Japan also has simulation facilities at Tsukuba University (Ikeda's department) and Tohoku University (Imamura is there). Japanese tsunami research has strong links to the engineering community.



The US Army Corps of Engineers used to do tsunami simulation at its lab in Vicksburg, but we don't know whether they still do that there.

