English Original
Efficient mine microseismic monitoring
Maochen Ge
Pennsylvania State University, University Park, PA 16802, USA
Abstract: During the past 20 years, the microseismic technique has grown from a pure research means for rockburst study to a basic industrial tool for daily safety monitoring at rockburst-prone mines. This article examines the important issues for efficient mine microseismic monitoring programs. The key technical issues for such a program are discussed from three aspects: monitoring planning, data processing, and microseismic event location. An efficient monitoring program would be impossible without a firm commitment from the mine management. Issues related to the management and mine operations are discussed, including monitoring program integration, efficient use of microseismic data, and the benefit of monitoring programs for mine safety and productivity.
Keywords: Microseismic; Rockburst; Source location; Ground control; Mining
1. Introduction
Rockbursts and coal bumps are sudden and violent releases of energy stored in rock masses and geological structures. They have been a persistent threat to mine safety, causing catastrophic failures of mine openings, paralyzing mining operations, damaging mining equipment, and posing a severe safety threat to underground workers. In 1958, a rockburst at the Springhill Coal mine in Nova Scotia claimed 75 lives. In the U.S., a total of 100 rockburst-caused fatalities were reported in the last 60 years (Blake and Hedley, 2003).
The energy released by a rockburst can be staggering. In 1995, a rockburst with a local magnitude of 5.2 ML was recorded at the Solvay trona mine, Wyoming, when an entire 1000 m × 2000 m panel collapsed. The US coal mining industry has experienced bumps since the 1920s, with magnitudes up to 4.5 (Blake and Hedley, 2003).
The technique that is widely used for studying rockburst activities is the microseismic monitoring technique. The technique utilizes signals generated by the rock mass itself to study fracture and failure processes. The real-time monitoring capability of the microseismic technique, in terms of event source location, magnitude and source mechanisms, makes it an ideal tool for studying mine seismicity and related ground control problems.
The phenomenon of the emission of micro-level sounds by stressed rocks was first discovered in the late 1930s by two U.S. Bureau of Mines (USBM) researchers, Obert and Duvall, when they carried out sonic studies in a deep hard rock mine (Obert, 1975). In the early 1960s, South African researchers began to utilize this phenomenon to study the rockburst problem associated with deep gold mines (Cook, 1963). This early study convincingly demonstrated the feasibility of locating rockbursts with the microseismic technique, the central element of mine microseismic monitoring.
In the middle of the 1960s, the USBM started a major research program in order to make the microseismic technique an efficient tool for mine safety monitoring. The hardware and software developed from this program, as well as the research and field tests carried out during this period, laid the foundation for the industrial use of the microseismic technique (Leighton and Blake, 1970 and Leighton and Duvall, 1972).
From the middle of the 1980s to the early 1990s, severe rockburst problems developed at a number of Canadian mines. Over 20 rockburst-prone mines installed microseismic systems for daily monitoring purposes. From the late 1980s to the 1990s, large-scale rockburst research was carried out in Canada, sponsored by the Canadian federal government, the Ontario provincial government, and major mining companies. This research fundamentally changed the role of the microseismic technique in the Canadian mining industry. It is no longer a pure research tool, but the basic monitoring means for mine safety and ground control.
This article examines the important issues for efficient mine microseismic monitoring programs. The discussion is carried out from three aspects: monitoring planning, data processing, and microseismic event location. Although the focus of this paper is the technical issues, it is important to note that an efficient monitoring program would be impossible without a firm commitment from the mine management. For this reason, we will also discuss issues related to the management and mine operations, including monitoring program integration, efficient use of microseismic data, and the benefit of an efficient monitoring program for mine safety and productivity.
2. Planning and optimization of monitoring systems
Careful planning is the foundation for establishing an efficient monitoring program and has a profound impact on the system's long-term performance. There are three important issues to be resolved at this stage: engineering assessment of monitoring objective and monitoring condition; determination of the monitoring system size (number of channels); and optimization of the sensor array layout. Also, the harsh mining environment requires a rigorous maintenance program because monitoring systems degrade rapidly.
2.1. Engineering assessment of monitoring objective and monitoring condition
The first task at the planning stage is a thorough assessment of monitoring objectives, including target areas, monitoring accuracy, and associated monitoring conditions. Since mining is a dynamic process, this assessment should take into account both short-term and long-term monitoring needs.
In order to achieve this goal, a comprehensive analysis should be carried out on potential rockburst hazards in relation to the mining conditions, such as the mining method, mine layout, ground control practice, mine development operations, geological materials and structures, and stress conditions at the mine site. As a result of this engineering assessment, the size of the monitoring system can be determined. The analysis should also yield useful information on feasible locations for sensor installation.
2.2. Using a large channel system
The number of channels needed depends on several factors. The most important ones are the size of the area to be covered, the location accuracy required, the expected signal level, and the rock formations. An initial estimate may be made by reference to mines with similar conditions.
It is always good practice to use a relatively large channel system. Why is a large channel system critical for daily monitoring programs? A simple answer to this question is that the efficiency of a microseismic system is first measured by its capability of catching enough signals. If a system has difficulty detecting the expected signals, the value of the system diminishes. This has been a major problem faced by the microseismic technique prior to the use of large channel systems.
The efficiency of large channel systems for signal detection is due to two mechanisms. First, with a large channel system, we effectively shorten the distances between a potential source and the sensors. Since the energy of a microseismic event decays rapidly with distance because of both attenuation and geometric spreading, shortening the signal travel distances is the only solution to this problem. Second, the emission of microseismic signals is generally directional, with significant variations in signal strength tied to direction. The only solution to this problem is to have sufficient sensors surrounding potential sources.
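To put the distance effect in perspective, a standard first-order model (a textbook relation, not taken from this paper) combines geometric spreading with anelastic attenuation:

A(r) = A0 · (r0 / r)^n · exp(−π f (r − r0) / (Q v))

where A0 is the amplitude at a reference distance r0, n ≈ 1 for body waves (spherical spreading), f is the dominant signal frequency, Q is the rock quality factor, and v is the wave velocity. With illustrative values of f = 100 Hz, Q = 100 and v = 4000 m/s, shortening the travel path from 400 m to 200 m raises the received amplitude by a factor of about 2.3, which is why placing additional channels close to potential sources pays off so quickly.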
2.3. Sensor array design and optimization
Sensor array geometry refers to the configuration of the sensors to be used for event location. From a technical point of view, it is probably the most important factor affecting the monitoring accuracy and reliability. The fundamental importance of the sensor array geometry lies in the fact that it determines the stability of the source location system, or in other words, it determines the impact of initial errors on the location result. A good array effectively minimizes the impact of initial errors on the location result.
The array effect can be illustrated by the density of the hyperbolic field associated with the array, which is an indication of the relative location accuracy for the array (Ge, 1988). Such an analysis shows that the location accuracy is best at the center of the array and decreases rapidly away from it. The worst areas are those behind the sensors.
Since errors are inevitable in the input data (such as arrival times, velocities and sensor coordinates), the source location accuracy depends greatly on how effectively the impact of these initial errors is reduced. Good array geometry is essential for reliable and accurate source location. The particular importance of the sensor array is its long-term effect on daily monitoring programs, as the achievable monitoring accuracy at a mine site largely depends on the array used. As such, sensor array design is the central task at the planning stage.
There are a number of important aspects which have to be carefully considered in the design process. The following is a brief discussion of these aspects.
2.3.1. Long- and short-term monitoring needs
Installation of an underground monitoring system is a very time-consuming and costly operation. In order to minimize the changes to be made at a later point, both the current and long-term monitoring needs have to be thoroughly assessed.
2.3.2. Field investigation
During the planning stage, the physical conditions at the potential sensor locations should be assessed. In addition to being accessible, the sites should not be shielded by large openings or major discontinuities. Rocks at the mounting site should be competent so that good coupling can be achieved.
2.3.3. General planning
Mining is a dynamic process. Production and development activities are often carried out at several different locations. In order to design an array that is efficient not only for the mine as a whole but also for those specific areas, one has to understand the basic array effects. The following are several basic rules for general planning.
• Two-dimensional arrays should be avoided. This type of array gives very poor accuracy in its perpendicular direction.
• Special sensor pairs may be designed to reinforce coverage in certain directions at particular locations.
2.3.4. Simulation analysis
After a general plan is made, its effect may be further studied through a simulation analysis. The emphasis of the study should be the pattern of the location accuracy, not the individual numbers. The array can be fine-tuned as a result of this study.
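As an illustration of what such a simulation analysis can look like, the sketch below maps the relative location accuracy of a candidate array with a linearized (GDOP-style) error analysis. This is one possible way to carry out the study, not the procedure used in the paper; the sensor coordinates, velocity and timing error are assumed values.

```python
# Sketch: mapping the relative location accuracy of a candidate sensor array.
# All numbers below (coordinates, velocity, picking error) are assumptions.
import numpy as np

sensors = np.array([[0, 0, -250], [600, 0, -280], [600, 600, -320],
                    [0, 600, -310], [300, 300, -50], [300, 300, -500]], float)
v_p = 5000.0      # assumed P-wave velocity, m/s
sigma_t = 0.5e-3  # assumed arrival-time picking error, s

def error_factor(src):
    """Linearized error amplification at a trial source point.

    Each row of G holds the derivatives of the arrival time at one sensor
    with respect to (x, y, z, t0). For independent timing errors, the
    covariance of the location estimate is approximately
    sigma_t**2 * inv(G.T @ G); the square root of the trace of its spatial
    block measures how strongly picking errors are magnified at this point.
    """
    d = np.linalg.norm(sensors - src, axis=1)
    G = np.hstack([(src - sensors) / (v_p * d[:, None]),  # dt/d(x, y, z)
                   np.ones((len(sensors), 1))])           # dt/dt0
    cov = np.linalg.pinv(G.T @ G)
    return np.sqrt(np.trace(cov[:3, :3]))

# Expected location error (metres) over a horizontal grid at the working level.
xs = np.linspace(-200, 800, 41)
ys = np.linspace(-200, 800, 41)
err_map = np.array([[sigma_t * error_factor(np.array([x, y, -300.0]))
                     for x in xs] for y in ys])
# Contouring err_map reveals the accuracy pattern: best inside the array,
# deteriorating rapidly outside and behind the sensors.
```

Such a map is exactly the kind of "pattern" output the text recommends focusing on: the absolute numbers depend on the assumed picking error, but the relative strengths and weaknesses of the array geometry do not.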
2.3.5. Calibration study
Calibration studies should be regularly scheduled after the array is in place as they will provide the most reliable information on the monitoring accuracy as well as the effect of the sensor array. Both rockburst and blast data can be utilized for the purpose.
2.4. System maintenance
Practical experience has shown that one of the most critical factors in keeping a mine monitoring system at its best performance level is regular maintenance. Mines represent an extremely harsh environment for microseismic monitoring. Sensors and wires can easily be damaged by mining activities and falling rocks. Water and excessive moisture may cause sensors to malfunction. Fractured ground may significantly reduce signal strength at sensors. Local disturbances from mining, transportation, and ventilation may create high background noise. Any of these problems can severely affect the performance of monitoring systems.
3. Microseismic data processing
The microseismic data recorded at a mine site can be extremely complicated. This is usually due to the excessive background noise present at the mine site. Microseismic signals are often partially or even completely swamped by noise, making it difficult to identify the actual arrival times of incoming signals.
“Clean” signals can also be very complicated. Some complications are due to other activities unrelated to the event under consideration. Furthermore, a good portion of these signals may be caused by S-wave arrivals instead of P-wave arrivals, as would normally be assumed. If these signals are used without discrimination, the database will be significantly contaminated. The Canadian experience of daily monitoring has shown that efficient monitoring depends on the ability to process microseismic data. In this section, we will discuss two important aspects of microseismic data processing: noise filtering and identification of arrival types.
3.1. Frequency analysis and data filtering
A primary task in data processing is to filter background noise. This requires a detailed study of the frequency distributions of both signals and noise. If the dominant frequency range of the signals is different from that of the noise, we may separate signals from background noise by applying an appropriate set of filters. The following is an example from a recent study by the author at a limestone mine, where the monitoring efficiency of the microseismic system was severely affected by background noise.
A detailed study was carried out on the characteristics of the microseismic signals and noise, including a manual inspection of all waveforms in the database, frequency analysis of representative waveforms, and case testing. It was determined that the microseismic signals at the mine site were primarily confined to the range of 10–200 Hz, with a dominant frequency spectrum of 10–130 Hz. Three typical noise types were identified: high-frequency noise (>200 Hz) due to various onsite mining activities, low-frequency and cyclic noise (<10 Hz) caused by distant machinery, and electrical noise at 60 Hz.
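A minimal sketch of a corresponding filter chain is given below, assuming a hypothetical 2 kHz sampling rate and standard Butterworth and notch designs; the band limits follow the figures quoted above, but the implementation itself is illustrative and is not the one used in the cited study.

```python
# Sketch: band-pass plus 60 Hz notch filtering matched to the bands quoted above.
# Sampling rate and filter orders are assumed values, not from the study.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 2000.0  # assumed sampling rate, Hz

# Band-pass 10-200 Hz: rejects the low-frequency machinery noise (<10 Hz) and
# the high-frequency mining noise (>200 Hz) while keeping the 10-130 Hz
# dominant band of the microseismic signals.
b_bp, a_bp = butter(4, [10.0, 200.0], btype="bandpass", fs=fs)

# Narrow notch at 60 Hz for the electrical (power-line) interference.
b_n, a_n = iirnotch(60.0, Q=30.0, fs=fs)

def clean(trace):
    """Zero-phase filtering so that arrival-time picks are not shifted."""
    x = filtfilt(b_bp, a_bp, trace)
    return filtfilt(b_n, a_n, x)

# Synthetic check: an 80 Hz 'event' buried in 60 Hz hum and slow machinery noise.
t = np.arange(0, 1.0, 1.0 / fs)
raw = (0.2 * np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 0.5) ** 2) / 0.002)
       + 1.0 * np.sin(2 * np.pi * 60 * t)     # electrical noise
       + 0.5 * np.sin(2 * np.pi * 3 * t))     # low-frequency machinery noise
filtered = clean(raw)
```

Zero-phase (forward–backward) filtering is used here deliberately: a causal filter would delay the waveform and bias the arrival-time picks that the location step depends on.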
3.2. Identifying the physical status of arrivals
In addition to noise elimination, another important task in data processing is the identification of arrival types. The first arrival detected by a sensor is not necessarily due to a P-wave, as has been assumed in most microseismic studies. It is a much more complicated phenomenon. In addition to P-waves, first arrivals may be due to S-waves, or even outliers. Outliers are those arrivals which are not due to the physical source that triggered the majority of the stations during an event time window.
The importance of being able to identify these arrivals rests on two facts. First, mixing these arrivals and treating them all as P-wave arrivals would introduce significant and systematic errors into the database. Second, S-wave and outlier triggering are not rare or isolated events.
An analysis of 434 events from a mine's database illustrates this point: when the arrival type picks are plotted as a function of triggering sequence, S-wave arrivals account for 41% of the total picks and outliers for about 10%. If the P-wave arrival assumption were used for these S-wave arrivals and outliers, the input data for event location would be severely contaminated. In fact, this was the single most important problem responsible for the poor performance of many daily monitoring systems in the early days.
The ability to identify arrival types in this way provides a unique means of controlling data quality and has been adopted by many mines.
4. Criteria on selection of source location code
The basic function of a daily mine microseismic system is to delineate the locations of rockbursts and the associated microseismic activity. Its efficiency is largely measured by the accuracy and reliability of event locations. In this regard, choosing a suitable source location code is critical for an efficient monitoring program.
Source location is a very broad subject. There are many different approaches and methods. The discussion here is limited to how to choose a suitable method for the daily monitoring purpose at mines. For more information, readers may refer to the author's two recent articles (Ge, 2003a and Ge, 2003b), which provide the detailed discussion of various major methods used in seismology, microseismic monitoring, and acoustic emission, including the triaxial sensor approach.
4.1. Convergence character of searching algorithms
One of the major considerations in selecting the source location method is the convergence character of the searching algorithm. The convergence character here refers to the stability of the iterative solution searching process. If a searching algorithm has a poor convergence character, it is prone to divergence. When this happens, the location searching process is in effect stopped, either by oscillating or by causing a system breakdown. For a daily monitoring program, this would be an intolerable problem, as manual analysis of the hundreds of events recorded each day is impossible.
Currently, there are three typical algorithms used for mine microseismic monitoring: the USBM method, Geiger's method and the Simplex method. The USBM method is a widely used mine-oriented source location method, developed by U.S. Bureau of Mines researchers in the early 1970s (Leighton and Blake, 1970 and Leighton and Duvall, 1972). The method is simple and easy to use and, because of its non-iterative algorithm, has no divergence problem. However, the method is severely limited for daily monitoring purposes because it cannot handle P- and S-wave arrivals simultaneously.
Geiger's method, developed at the beginning of the last century (Geiger, 1910 and Geiger, 1912), is the best known and most widely used source location method. In seismology, it is used almost universally for local earthquake locations. The algorithm is efficient and flexible, but prone to the divergence problem and, therefore, not suitable for daily monitoring purposes.
The Simplex method developed by Nelder and Mead (1965) searches for the minima of mathematical functions through function comparison. The method was introduced for source location purposes in the late 1980s by Prugger and Gendzwill (Prugger and Gendzwill, 1989 and Gendzwill and Prugger, 1989). The mathematical procedures and related concepts in error estimation for this method were further discussed by Ge (1995). The most important advantage of the Simplex method over other widely used iterative algorithms is its robust convergence character. Divergence is essentially not an issue for this method. This advantage, together with its efficiency and flexibility, makes the method the top choice for daily microseismic monitoring purposes.
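As an illustration of the approach, the sketch below locates a single event by minimizing the sum of squared arrival-time residuals with a Nelder-Mead (Simplex) search. It is a minimal example with synthetic picks and assumed geometry and velocity, not the algorithm as implemented in the cited work.

```python
# Sketch: Simplex (Nelder-Mead) event location from P-wave arrival times.
# Geometry, velocity and picks are synthetic/assumed values.
import numpy as np
from scipy.optimize import minimize

sensors = np.array([[0, 0, -250], [600, 0, -280], [600, 600, -320],
                    [0, 600, -310], [300, 300, -50], [300, 300, -500]], float)
v_p = 5000.0  # assumed P-wave velocity, m/s

# Synthetic "observed" picks from a known source, with 0.5 ms picking noise.
true_src, true_t0 = np.array([420.0, 180.0, -350.0]), 0.012
t_obs = true_t0 + np.linalg.norm(sensors - true_src, axis=1) / v_p
t_obs = t_obs + np.random.default_rng(0).normal(0.0, 0.5e-3, len(sensors))

def misfit(params):
    """Sum of squared residuals for a trial source (x, y, z) and origin time t0."""
    src, t0 = params[:3], params[3]
    t_calc = t0 + np.linalg.norm(sensors - src, axis=1) / v_p
    return np.sum((t_obs - t_calc) ** 2)

# Start the search at the centre of the array; the Simplex search needs no
# derivatives and converges robustly from such a rough initial guess.
guess = np.array([300.0, 300.0, -300.0, 0.0])
res = minimize(misfit, guess, method="Nelder-Mead",
               options={"xatol": 0.1, "fatol": 1e-12, "maxiter": 10000})
x, y, z, t0 = res.x  # located source coordinates (m) and origin time (s)
```

Because the search only compares misfit values, it does not require the travel-time derivatives that Geiger-type algorithms linearize around, which is the practical reason divergence is essentially not an issue.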
4.2. Using P- and S-wave arrivals simultaneously
The ability to use P- and S-wave arrivals simultaneously has a number of important implications for accurate event location. First, it allows an efficient use of the available data. An important phenomenon frequently observed in both seismology and microseismic monitoring is the higher amplitude for S-wave arrivals. In many cases, we may only see S-wave arrivals, instead of P-wave arrivals. Fig. 4 is a clear demonstration of this phenomenon.
Second, most microseismic events are very small, with only five or six arrivals. If the location code can only use the P-wave velocity and there are S-wave arrivals, we then have to either discard the S-wave data or apply the P-wave arrival assumption to all arrivals. As discussed earlier, neither of these approaches is acceptable: using the P-wave assumption incurs large location errors, and discarding S-wave arrivals may severely weaken the effective sensor array and make the solution vulnerable. The other important advantage of using P- and S-wave arrivals simultaneously is that it introduces a new error control mechanism for improving source location accuracy.
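In the notation of the sketch above, and as a standard formulation rather than a quotation from the paper, mixing the two phases simply means giving each arrival its own phase velocity in the residual,

r_i = t_i(obs) − t0 − d_i / v_i,   with v_i = v_p or v_s according to the identified arrival type,

and minimizing the sum of the squared r_i over the source coordinates and t0. One example of the extra constraint this brings: when both phases are picked at the same sensor, the S–P time difference fixes the source–sensor distance directly, d_i = (t_i^S − t_i^P) · v_p v_s / (v_p − v_s), independently of the unknown origin time.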
4.3. Optimization method
Accurate source location depends greatly on our ability to limit the impact of initial errors. There are two principal approaches: array optimization and data optimization. The importance of array optimization, as discussed earlier, is that it creates a stable mathematical system which will not be overly sensitive to initial errors.