ABSTRACT
Friction Stir Welding (FSW) is a solid state joining process that joins metals without fusion or filler materials. The frictional heat is produced by a rapidly rotating, non-consumable, high strength tool pin that extends from a cylindrical shoulder. The process is particularly applicable to aluminium alloys but can be extended to other materials as well. Plates, sheets and hollow pipes can be welded by this method, and the process is well suited to automation. The weld produced has a finer microstructure and is superior in characteristics to the parent metal. FSW finds application in the shipbuilding, aerospace, railway, electrical and automotive industries. The limitations of FSW are being reduced by intensive research and development. Its cost effectiveness and ability to weld dissimilar metals make it a commonly used welding process in recent times.
CONTENTS
1. INTRODUCTION
2. WORKING PRINCIPLE
3. DESCRIPTION OF THE ROTATING TOOL PIN
4. MICROSTRUCTURE CLASSIFICATION
5. FACTORS AFFECTING WELD QUALITY
6. MATERIAL SUITABILITY
7. OTHER MATERIALS
8. JOINT GEOMETRIES
9. FSW OF MILD STEEL
10. FRICTION STIR WELDING MACHINES
11. ADVANTAGES OF FSW
12. APPLICATIONS OF FSW
13. LIMITATIONS OF FSW
14. RETRACTABLE PIN TOOL
15. FSW EQUIPMENT MANUFACTURERS
16. AREAS OF ACTIVE DEVELOPMENT AND RESEARCH
17. CONCLUSION
18. BIBLIOGRAPHY
1. Introduction
In late 1991 a very novel and potentially world beating welding method was conceived at TWI. The process was duly named friction stir welding (FSW), and TWI filed for world-wide patent protection in December of that year. TWI (The Welding Institute) is a world famous institute in the UK that specializes in materials joining technology. Consistent with the more conventional methods of friction welding, which have been practiced since the early 1950s, the weld is made in the solid phase, that is, no melting is involved. Compared to conventional friction welding, FSW uses a rotating tool to generate the necessary heat for the process. Since its invention, the process has received world-wide attention and today two Scandinavian companies are using the technology in production, particularly for joining aluminium alloys. Also, FSW is a process that can be automated. It is also a cleaner and more efficient process compared to conventional techniques.
2. Working principle
In friction stir welding (FSW) a cylindrical, shouldered tool with a profiled probe is rotated and slowly plunged into the joint line between two pieces butted together. The parts have to be clamped onto a backing bar in a manner that prevents the abutting joint faces from being forced apart. Frictional heat is generated between the wear resistant welding tool and the material of the work pieces. This heat causes the latter to soften without reaching the melting point and allows the tool to traverse along the weld line. The maximum temperature reached is of the order of 0.8 of the melting temperature of the material. The plasticized material is transferred from the leading edge of the tool to the trailing edge of the tool probe and is forged by the intimate contact of the tool shoulder and the pin profile. This leaves a solid phase bond between the two pieces. The process can be regarded as a solid phase keyhole welding technique, since a hole to accommodate the probe is generated and then filled during the welding sequence.
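As a rough worked example (assuming the 0.8 factor is applied to the absolute melting temperature of pure aluminium, about 933 K):

$$ T_{\max} \approx 0.8\,T_m \approx 0.8 \times 933\ \text{K} \approx 746\ \text{K} \approx 473\,^{\circ}\text{C} $$

This is consistent with the peak temperatures commonly reported for friction stir welding of aluminium alloys, roughly 400-500 °C.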
3. Description of the rotating tool pin
The non-consumable tool has a circular section except at the end, where there is a threaded probe or a more complicated flute; the junction between the cylindrical portion and the probe is known as the shoulder. The probe penetrates the work piece whereas the shoulder rubs against the top surface. The threaded end of the probe is typically 5 to 6 mm in diameter with a height of 5 to 6 mm (these dimensions vary with the metal thickness). The tool is set at a small positive tilt angle relative to the welding direction. The design of the pin and shoulder assembly plays a major role in how the material moves during the process.
4. Microstructure Classification
The first attempt at classifying microstructures was made by P L Threadgill (Bulletin, March 1997). This work was based solely on information available from aluminium alloys. However, it has become evident from work on other materials that the behavior of aluminium alloys is not typical of most metallic materials, and therefore the scheme cannot be broadened to encompass all materials. It is therefore proposed that the following revised scheme is used. This has been developed at TWI, but has been discussed with a number of appropriate people in industry and academia, and has also been provisionally accepted by the Friction Stir Welding Licensees Association. The system divides the weld zone into distinct regions as follows:
A. Unaffected material
B. Heat affected zone (HAZ)
C. Thermo-mechanically affected zone (TMAZ)
D. Weld nugget (Part of thermo-mechanically affected zone)
Unaffected material or parent metal: This is material remote from the weld which has not been deformed and which, although it may have experienced a thermal cycle from the weld, is not affected by the heat in terms of microstructure or mechanical properties.
Heat affected zone (HAZ): In this region, which clearly will lie closer to the weld centre, the material has experienced a thermal cycle, which has modified the microstructure and/or the mechanical properties. However, there is no plastic deformation occurring in this area. In the previous system, this was referred to as the "thermally affected zone". The term heat affected zone is now preferred, as this is a direct parallel with the heat affected zone in other thermal processes, and there is little justification for a separate name.
Thermo-mechanically affected zone (TMAZ): In this region, the material has been plastically deformed by the friction stir welding tool, and the heat from the process will also have exerted some influence on the material. In the case of aluminium, it is possible to get significant plastic strain without recrystallisation in this region, and there is generally a distinct boundary between the recrystallised zone and the deformed zones of the TMAZ. In the earlier classification, these two sub-zones were treated as distinct microstructural regions. However, subsequent work on other materials has shown that aluminium behaves in a different manner to most other materials, in that it can be extensively deformed at high temperature without recrystallisation. In other materials, the distinct recrystallised region (the nugget) is absent, and the whole of the TMAZ appears to be recrystallised.
Weld Nugget: The recrystallised area in the TMAZ in aluminium alloys has traditionally been called the nugget. Although this term is descriptive, it is not very scientific. However, its use has become widespread, and as there is no word which is equally simple with greater scientific merit, this term has been adopted. A schematic diagram is shown in the above Figure which clearly identifies the various regions. It has been suggested that the area immediately below the tool shoulder (which is clearly part of the TMAZ) should be given a separate category, as the grain structure is often different here. The microstructure here is determined by rubbing by the rear face of the shoulder, and the material may have cooled from its maximum temperature. It is suggested that this area be treated as a separate sub-zone of the TMAZ.
5. Factors affecting weld quality
• Type of metal
• Angle of tool
• Traversing speed of the tool
• Spinning speed of tool
• Pressure applied by the pin tool
Research is ongoing into combining the above factors in order to achieve better control of the process.
6. Material suitability
TWI has concentrated most of its efforts on optimizing the process for the joining of aluminium and its alloys. Subsequent studies have shown that cast to cast and cast to extruded (wrought) combinations, in similar and dissimilar aluminium alloys, are equally possible. The following aluminium alloys can be successfully welded to yield reproducible, high integrity welds within defined parametric tolerances:
• 2000 series aluminium (Al-Cu)
• 3000 series aluminium (Al-Mn)
• 4000 series aluminium (Al-Si)
• 5000 series aluminium (Al-Mg)
• 6000 series aluminium (Al-Mg-Si)
• 7000 series aluminium (Al-Zn)
• 8000 series aluminium (Al-Li)
7. Other materials
The technology of friction stir welding has been extended to other materials, on which research is ongoing. Some of them are as follows:
• Copper and its alloys
• Lead
• Titanium and its alloys
• Magnesium and its alloys
• Zinc
• Plastics
• Mild steel
Companies practicing and developing FSW are spending heavily on improving its use for plastics. It has been demonstrated that FSW is a much more efficient and cleaner process than conventional adhesive bonding of plastics, but it is yet to be made cost and material effective. Ceramics is another field where FSW could be very useful in the future.
8. Joint Geometries
FSW is independent of gravity. Hence, it can be used to weld in any position: vertical, horizontal and even orbital. For this reason FSW has been used to make circumferential (annular) welds in fuel tanks for space vehicles. Besides these, FSW can also be used for normal fillet and corner welds as well as double-V butt joints.
9. FSW of Mild Steel
Steel can be friction stir welded, but the essential problem is that tool materials wear rapidly. The sample becomes red hot during welding (as shown in the figure). Since the tool gets red hot it is necessary to protect it against the environment using a shielding gas. So generally FSW is avoided for mild steel. This is not such a great disadvantage since there are more efficient methods to weld mild steel. The weld shown is made by Hitachi of Japan.
10. Friction stir welding machines
10.1 ESAB SuperStir™ machine FW28
The machine has a vacuum clamping table and can be used for non-linear joint lines.
• Sheet thickness: 1mm-25mm aluminium
• Work envelope: Approx 5 x 8 x 1m
• Maximum down force: Approx 60kN (6t)
• Maximum rotation speed: 5000 rev/min
10.2 Modular machine FW22 to weld large size specimens
A laboratory machine was built in October 1996 to accommodate large sheets and to weld prototype structures. The modular construction of FW22 enables it to be easily enlarged for specimens with even larger dimensions.
• Sheet thickness: 3mm-15mm aluminium
• Maximum welding speed: up to 1.2m/min
• Current maximum sheet size: 3.4m length x 4m width
• Current maximum working height: 1.15m
10.3 Moving gantry machine FW21
The purpose built friction stir welding machine FW21 was built in 1995. This machine uses a moving gantry, with which straight welds up to 2m long can be made. It was used to prove that welding conditions can be achieved which guarantee constant weld quality over the full length of long welds.
• Sheet thickness: 3mm-15mm aluminium
• Maximum welding speed: up to 1.0m/min
• Current maximum sheet size: 2m length x 1.2m width
10.4 Heavy duty Friction Stir Welding machines FW18 and FW14
Two existing machines within TWI's Friction and Forge Welding Group have been modified exclusively to weld thick sections by FSW. The following thickness range has been experimentally investigated but the machines are not yet at their limits.
• Sheet thickness: 5mm-50mm aluminium from one side
10mm-100mm aluminium from two sides
5mm thick titanium from one side
• Power: up to 22kW
• Welding speed: up to 1m/min
10.5 High rotation speed machine FW20
For welding thin aluminium sheets TWI equipped one of its existing machines with an air cooled high speed head which allows rotation speeds of up to 15,000rev/min.
• Sheet thickness: 1.2mm-12mm aluminium
• Maximum welding speed: up to 2.6m/min, infinitely variable
10.6 Friction Stir Welding demonstrator FW16
TWI's small transportable machine produces annular welds in hexagonal aluminium alloy discs. It has been exhibited at fairs in the USA, Sweden, Germany and the United Kingdom in recent years. It is an eye catcher which enables visitors to produce their first friction stir weld themselves. It can be operated from 110 V or 220-240 V supplies and has been used by TWI and its member companies to demonstrate the process.
11. Advantages of FSW
• The process is environment friendly since no fumes or spatter is generated and no shielding gas is required.
• A non-consumable tool is used.
• Since the weld is obtained in the solid phase, gravity does not play any part and hence the process can be performed in all positions (vertical, horizontal, overhead or orbital).
• No grinding, brushing or pickling is required.
• Since the temperature involved in the process is quite low, shrinkage during solidification is less
• One tool can be typically used for up to 1000 metres of weld length (6000 series aluminium alloy)
• No fusion or filler materials are required.
• No oxide removal necessary as in fusion welding.
• The weld obtained is of superior quality with excellent mechanical properties and fine micro structure.
• The process is cost effective since mechanical forming after welding can be avoided
• Dissimilar metals can be welded.
• Automation is possible
12. Applications of FSW
12. 1 Shipbuilding and marine industries
The shipbuilding and marine industries are two of the first industry sectors which have adopted the process for commercial applications. The process is suitable for the following applications:
• Panels for decks, sides, bulkheads and floors
• Aluminium extrusions
• Hulls and superstructures
• Helicopter landing platforms
• Offshore accommodation
• Marine and transport structures
• Masts and booms, e.g. for sailing boats
• Refrigeration plant
12. 2 Aerospace industry
At present the aerospace industry is welding prototype parts by friction stir welding. Opportunities exist to weld skins to spars, ribs, and stringers for use in military and civilian aircraft. This offers significant advantages compared to riveting and machining from solid, such as reduced manufacturing costs and weight savings. Longitudinal butt welds and circumferential lap welds of Al alloy fuel tanks for space vehicles have been friction stir welded and successfully tested. The process could also be used to increase the size of commercially available sheets by welding them before forming. The friction stir welding process can therefore be considered for:
• Wings, fuselages, empennages
• Cryogenic fuel tanks for space vehicles
• Aviation fuel tanks
• External throw away tanks for military aircraft
• Military and scientific rockets
• Repair of faulty MIG welds
12. 3 Railway industry
The commercial production of high speed trains made from aluminium extrusions which may be joined by friction stir welding has been published. Applications include:
• High speed trains
• Rolling stock of railways, underground carriages, trams
• Railway tankers and goods wagons
• Container bodies
12. 4 Land transportation
The friction stir welding process is currently being experimentally assessed by several automotive companies and suppliers to this industrial sector for its commercial application. Potential applications are:
• Engine and chassis cradles
• Wheel rims
• Attachments to hydro formed tubes
• Tailored blanks, e.g. welding of different sheet thicknesses
• Space frames, e.g. welding extruded tubes to cast nodes
• Truck bodies
• Tail lifts for lorries
• Mobile cranes
• Armour plate vehicles
• Fuel tankers
• Caravans
• Buses and airfield transportation vehicles
• Motorcycle and bicycle frames
• Articulated lifts and personnel bridges
• Skips
• Repair of aluminium cars
• Magnesium and magnesium/aluminium joints
12. 5 Construction industry
The use of portable FSW equipment is possible for:
• Aluminium bridges
• Facade panels made from aluminium, copper or titanium
• Window frames
• Aluminium pipelines
• Aluminium reactors for power plants and the chemical industry
• Heat exchangers and air conditioners
• Pipe fabrication
12. 6 Electrical industry
The electrical industry shows increasing interest in the application of friction stir welding for:
• Electric motor housings
• Busbars
• Electrical connectors
• Encapsulation of electronics
12.7 Other industry sectors
Friction stir welding can also be considered for:
• Refrigeration panels
• Cooking equipment and kitchens and furniture
• Gas tanks and gas cylinders, connecting of aluminium or copper coils in rolling mills
13. Limitations
• Welding speeds are moderately slower
• Work pieces must be rigidly clamped
• Backing bar required
• Keyhole at the end of each weld
• Requirement of different length pin tools when welding materials of varying thickness
Figure: hole left at the end of a friction stir weld
14. Retractable pin tool
Two major drawbacks of FSW, the requirement for different length pin tools when welding materials of varying thickness and the keyhole left at the end of the weld, may be overcome with the help of a retractable pin tool developed by NASA. The automatic retractable pin tool uses a computer controlled motor to automatically retract the pin into the shoulder of the tool at the end of the weld, preventing keyholes. This design also allows the pin angle and length to be adjusted for changes in material thickness.
15. FSW equipment manufacturers
Some of the manufacturers of friction stir welding machines are:
• Friction Stir Welding Link, USA
• General Tool Company, USA
• Hitachi Ltd., Japan
• Smart Technology Ltd., UK
16. Areas of active development and research
• Development of new tool design
• Use of process at higher speeds
• Research in the use of other materials
• Investigation of fundamental characteristics of FSW created joints
17. Conclusion
Such has been the interest in FSW, which was patented not so long ago, that considerable effort is being made to transfer its technological benefits from aluminium to other materials, and to make the process more flexible. In the new millennium there is no doubt that the automotive sector will find an increasing number of uses for this process as its cost effectiveness and ability to weld dissimilar material combinations with minimal distortion are more widely appreciated. The process is an excellent alternative for alloys that have inherent fusion welding problems.
18. Bibliography
1. Bhadeshia, H.K.D.H., "Friction Stir Welding", University of Cambridge.
2. Kallee, S. and Nicholas, D., "Friction Stir Welding at TWI", TWI World Centre for Materials Joining Technology.
3. Palmer, W., "Friction Stir Welding: An Improved Way to Join Metals".
FLUID FLOW VISUALIZATION
Abstract
The flow of air cannot be seen by the naked eye. The flow of water can be seen, but not its streamlines or velocity distribution. The science which converts such fluid behaviour, invisible to the eye, into image information is called ‘flow visualization’, and it is extremely useful for clarifying fluid phenomena. The saying ‘seeing is believing’ most aptly expresses the importance of flow visualization.
This report presents an overview of techniques for visualization of fluid flow data.
The popular techniques Temperature Sensitive Paint, Pressure Sensitive Paint, the Tuft Method, Hydrogen Bubbles, Optical Methods, and Particle Image Velocimetry are explained in detail. The figures for the various techniques are presented at the end of this report.
This report concentrates mainly on experimental fluid flow visualization; nevertheless, important computer-aided visualization methods, such as particle image velocimetry, are also summarized. A summary of all the important techniques is presented at the end of the report. An introduction to computer graphics flow visualization is included so that the reader can gain a basic idea of the techniques of graphics visualization.
Purposes and Problems of Flow Visualization
Flow visualization has probably existed as long as fluid flow research itself. Until recently, experimental flow visualization has been the main visualization aid in fluid flow research. Experimental flow visualization techniques are applied for several reasons:
• To get an impression of fluid flow around a scale model of a real object, without any calculations;
• As a source of inspiration for the development of new and better theories of fluid flow;
• To verify a new theory or model.
Though used extensively, these methods suffer from some problems. A fluid flow is often affected by the experimental technique, and not all fluid flow phenomena or relevant parameters can be visualized with experimental techniques. Also, the construction of small scale physical models and experimental equipment such as wind tunnels are expensive, and experiments are time consuming.
Recently a new type of visualization has emerged: computer-aided visualization. The increase of computational power has led to an increasing use of computers for numerical simulations. In the area of fluid dynamics, computers are extensively used to calculate velocity fields and other flow quantities, using numerical techniques to solve the governing Navier-Stokes equations. This has led to the emergence of Computational Fluid Dynamics (CFD) as a new field of research and practice.
To analyze the results of the complex calculations, computer visualization techniques are necessary. Humans are capable of understanding much more information when it is shown visually, rather than numerically. By using the computer not only for calculating the numerical data, but also for visualizing these data in an understandable way, the benefits of the increasing computational power are much greater.
The visualization of fluid flow simulation data may have several different purposes. One purpose is the verification of theoretical models in fundamental research. When a flow phenomenon is described by a model, this flow model should be compared with the ‘real’ fluid flow. The accuracy of the model can be verified by calculation and visualization of a flow with the model, and comparison of the results with experimental results. If the numerical results and the experimental flow are visualized in the same way, a qualitative verification by visual inspection can be very effective. Research in numerical methods for solving the flow equations can also be supported, not only by visualizing the solutions found, but also by visualizing intermediate results during the iterative solution process.
Another purpose of fluid flow visualization is the analysis and evaluation of a design. For the design of a car, an aircraft, a harbor, or any other object that is functionally related with fluid flow, calculation and visualization of the fluid flow phenomena can be a powerful tool in design optimization and evaluation. In this type of applied research, communication of flow analysis results to others, including non-specialists, is important in the decision making process.
In practice, often both experimental and computer-aided visualization will be applied. Fluid flow visualization using computer graphics will be inspired by experimental visualization. Following the development of 3D flow solution techniques, there is especially an urgent need for visualization of 3D flow patterns. This presents many interesting but still unsolved problems to computer graphics research. Flow data are different in many respects from the objects and surfaces traditionally displayed by 3D computer graphics. New techniques are emerging for generating informative images of flow patterns; also, techniques are being developed to transform the flow visualization problem to display of traditional graphics primitives.
Experimental Flow Visualization
1. Pressure Sensitive Paint (PSP) and Temperature Sensitive Paint (TSP)
The use of luminescent molecular probes for measuring surface temperature and pressure on wind tunnel models and flight vehicles offers the promise of enhanced spatial resolution and lower costs compared to traditional techniques. These new sensors are called temperature-sensitive paint (TSP) and pressure-sensitive paint (PSP).
Traditionally, arrays of thermocouples and pressure taps have been used to obtain surface temperature and pressure distributions. These techniques can be very labor-intensive and model/flight vehicle preparation costs are high when detailed maps of temperature and pressure are desired. Further, the spatial resolution is limited by the number of instrumentation locations chosen. By comparison, the TSP and PSP techniques provide a way to obtain simple, inexpensive, full-field measurements of temperature and pressure with much higher spatial resolution. Both TSP and PSP incorporate luminescent molecules in a paint which can be applied to any aerodynamic model surface. Figure 1 shows a schematic of a paint layer incorporating a luminescent molecule.
The paint layer is composed of luminescent molecules and a polymer binder material. The resulting ‘paint’ can be applied to a surface using a brush or sprayer. As the paint dries, the solvent evaporates and leaves behind a polymer matrix with luminescent molecules embedded in it. Light of the proper wavelength to excite the luminescent molecules in the paint is directed at the model, and luminescent light of a longer wavelength is emitted by the molecules. Using the proper filters, the excitation light and luminescent emission light can be separated and the intensity of the luminescent light can be determined using a photodetector. Through the photo-physical processes known as thermal and oxygen quenching, the luminescent intensity of the paint emission is related to temperature or pressure. Hence, from the detected luminescent intensity, temperature and pressure can be determined. The polymer binder is an important ingredient of a luminescent paint, used to adhere the paint to the surface of interest. In some cases the polymer matrix is a passive anchor; in other cases, however, the polymer may significantly affect the photo-physical behavior of the paint through a complicated interaction between the luminescent molecules and the macro-molecules of the polymer. A good polymer binder should be robust enough to sustain skin friction and other forces on the surface of an aerodynamic model. Also, it must be easy to apply and repair as a smooth, thin film on the surface.
For TSP, many commercially available resins and epoxies can be chosen to serve as polymer binders, provided they are not oxygen permeable and do not degrade the activity of the luminophore molecules. In contrast, a good polymer binder for a PSP must have high oxygen permeability, besides being robust and easy to apply.
The CCD camera system is the most commonly used detection system for luminescent paints in aerodynamic testing. A schematic of this system is shown in Figure 2. The luminescent paint (TSP or PSP) is coated on the surface of the model. The paint is excited to luminescence by the illumination source, such as a lamp or a laser. The luminescent intensity image is filtered optically to eliminate the illuminating light and then captured by a CCD camera and transferred to a computer with a frame grabber board for image processing. Both a wind-on image (at the temperature and pressure to be determined) and a wind-off image (at a known constant temperature and pressure) are obtained. The ratio between the wind-on and wind-off images is taken after the dark current level image is subtracted from both images, yielding a relative luminescent intensity image. Using the calibration relations, the surface temperature and pressure distributions can be computed from this relative luminescent intensity image.
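As an illustrative sketch of this image-processing chain, the ratio and calibration steps could be coded along the following lines in Python. The Stern-Volmer form of the calibration and the array and coefficient names are assumptions made for the example; an actual facility would use its own calibration relation.

```python
import numpy as np

def psp_pressure_map(wind_on, wind_off, dark, a, b, p_ref):
    """Convert raw PSP camera frames into a surface pressure map.

    wind_on, wind_off, dark : 2-D arrays of camera counts (same size)
    a, b                    : calibration coefficients from a prior paint calibration
    p_ref                   : known reference pressure of the wind-off condition
    """
    # Subtract the dark-current image from both frames
    on = wind_on.astype(float) - dark
    off = wind_off.astype(float) - dark

    # Relative luminescent intensity image (wind-off over wind-on)
    ratio = off / on

    # Assumed Stern-Volmer style calibration: I_off / I_on = a + b * (p / p_ref)
    return p_ref * (ratio - a) / b

# Example usage with synthetic 512 x 512 frames
rng = np.random.default_rng(0)
dark = rng.normal(100.0, 1.0, (512, 512))
wind_off = dark + 4000.0
wind_on = dark + 3000.0          # lower intensity where pressure is higher
p = psp_pressure_map(wind_on, wind_off, dark, a=0.2, b=0.8, p_ref=101325.0)
```

For TSP the same ratio image would instead be converted to temperature through the paint's temperature calibration.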
TSP has also been utilized as an approach to flow transition detection: since convective heat transfer is much higher in turbulent flow than in laminar flow, TSP can visualize the surface temperature difference between turbulent and laminar regions. In low speed wind tunnel tests, the model is typically heated or cooled to enhance the temperature variation across the transition line.
The PSP/TSP technique provides a promising tool for measuring surface pressure distributions on a high-speed rotating blade at high spatial resolution. Instrumentation is particularly difficult in the rotating environment, and pressure taps weaken the structure of the rotating blade. Recently, a test was performed to measure the chordwise pressure distributions on the rotor blades of a high speed axial flow compressor. TSP (Ru(bpy)-Shellac) and PSP (Ru(ph2-phen) in GE RTV 118) were applied to alternating blades. The TSP provided the temperature distributions on the blades for temperature correction of the PSP results. A scanning laser system was used for excitation and detection of luminescence. Both the TSP and PSP were excited with an argon laser and the luminescence was detected with a Hamamatsu PMT. The same system was used on an Allied Signal F109 gas turbine engine, giving the suction surface pressure map at 14,000 rpm shown in Figure 3.
Figure 3: Fan blade pressure distribution at 14,000rpm
Characteristics of PSP:
As mentioned previously, PSP simply consists of a luminescent molecule suspended in some type of oxygen permeable binder. Currently, the majority of these binders are some form of silicone polymer. The vast majority of PSP formulations to date come in a liquid form that is suitable for use with normal spray-painting equipment and methods.
Typically, in its simplest application, PSP is the topmost layer of a multilayer coating on a model surface. The PSP is usually applied over a white undercoat, which provides two related benefits: the white undercoating reflects a large portion of the light that is incident upon it, which amplifies not only the excitation illumination but the emission illumination as well.
Advantages:
As previously mentioned, pressure sensitive paints are used to measure surface pressures. The conventional methods of measuring these pressures are to apply pressure taps or transducers to a model, but these approaches have some significant disadvantages.
First of all, taps and transducers only allow measurements at discrete points on the model surface. The surface pressures at other locations on the model can only be interpolated from the known points. Another disadvantage is that taps and transducers are intrusive to the flow. Measurements cannot be taken downstream of other taps or transducers, since the flow is altered once it passes over the upstream disturbances. Finally, taps and transducers are time-consuming and expensive to use. Models used for determining surface loads in aircraft design typically cost $500,000 to $1 million, with approximately 30% of that cost going towards the pressure taps and their installation.
A relatively new method to surface pressure measurement utilizes pressure sensitive paint, or PSP. Pressure sensitive paint has numerous advantages over the more conventional pressure taps and transducers. The most obvious is that PSP is a field measurement, allowing for a surface pressure determination over the entire model, not just at discrete points. Hence, PSP provides a much greater spatial resolution than pressure taps, and disturbances in the flow are immediately observable.
PSP also has the advantage of being a non-intrusive technique. Use of PSP, for the most part, does not affect the flow around the model, allowing its use over the entire model surface. The use of PSP eliminates the need for a large number of pressure taps, which leads to more than one benefit. Since pressure taps do not need to be installed, models can be constructed in less time, and with less money, than before. Also, since holes do not need to be drilled in the model for the installation of taps, the model strength is increased and higher Reynolds numbers can be obtained. Not only does the PSP method reduce the cost of the model construction, it also reduces the cost of the instrumentation needed for data collection. In addition, the equipment needed for PSP not only costs less than pressure taps but can also be easily reused for numerous models.
In aircraft design, PSP has the potential to save both time and money. The continuous data distribution on the model provided by PSP can easily be integrated over specific components, which can provide detailed surface loads. Since a model for use with the PSP technique is faster to construct, this allows for load data to be known much earlier in the design process.
Disadvantages:
Unfortunately, PSP is not without its undesirable characteristics. One of these characteristics is that the response of the luminescent molecules in the PSP coating degrades with time of exposure to the excitation illumination. This degradation occurs because of a photochemical reaction that occurs when the molecules are excited. Eventually, this degradation of the molecules determines the useful life of the PSP coating. This characteristic becomes more important for larger models, as the cost and time of PSP reapplication becomes a significant factor.
A second undesirable characteristic of PSP is that the emission intensity is affected by the local temperature. This behavior is due to the effect temperature has on the energy state of the luminescent molecules, and the oxygen permeability of the binder. This temperature dependence becomes even more significant in compressible flow tests, where the recovery temperature over the model surface is not uniform.
Experimental Setup
As seen below, the PSP experimental setup is composed of a number of separate elements. The specifications of each element are dependent upon the test conditions, objectives, and budget.
Typical PSP experimental setup
Illumination:
The illumination element ("light source") of the setup is used to excite the luminescent molecules in the PSP coating. Since the intensity of the emitted illumination is proportional to the excitation illumination, the source of illumination must be of sufficient power in the absorption spectrum of the PSP coating, and also have a stable output over time. For complex models with numerous surfaces, multiple illumination elements are often needed to achieve an adequate coverage of the model surface. Some examples of illumination elements are lasers, continuous and flash arc lamps, and simple incandescent lamps.
Imaging:
The imaging element ("camera") used in the experimental setup is heavily dependent upon the required results. In most cases, a good spatial resolution of the pressure distribution is required. Imaging elements that can provide a good spatial resolution include conventional still photography, low-light video cameras, or scientific grade CCD cameras. In most PSP applications, the electronic CCD cameras are the preferred imaging element due to their good spatial resolution and capability to reduce the data they acquire in real time. CCD cameras can be divided into two groups, conventional black and white video cameras and scientific grade CCD digital cameras.
Conventional black and white video cameras are attractive mainly due to their low cost. Typical cameras deliver an 8-bit intensity resolution over a 640 X 480 pixel spatial resolution. Even though conventional black and white video cameras are not precision scientific instruments, when coupled with a PC image processor, the results obtained are more than acceptable for qualitative analysis, and are potentially acceptable for quantitative analysis in certain conditions.
Scientific grade cooled CCD digital cameras, on the other hand, are precision scientific instruments that provide high-precision measurements, at the price of an increased cost. Typical cameras of this type can exhibit 16-bit intensity resolution and spatial resolution up to 2048 X 2048 pixels. For many PSP applications, the high resolution provided by these cameras is mandatory.
Images taken of pressures on an AIM-54 Phoenix missile separating from an F-14 fighter
Optical Filters:
In order to avoid erroneous illumination readings, it is necessary that the illumination element only output in the absorption spectrum, while the imaging element only records the emission spectrum. When lasers are used for excitation purposes, this is not an issue, as a laser only produces light in one wavelength. Most excitation sources, however, produce light in a wide spectrum. In order to prevent the excitation source spectrum from overlapping the emission spectrum, optical filters are placed over both the illumination element and the imaging element. This constraint also makes it necessary to conduct all PSP testing in a darkened test section; otherwise ambient light may contaminate the readings.
Data Acquisition & Post Processing:
The data acquisition and post processing in most PSP applications is done in a modular fashion. Initially the camera and computer acquire images for wind-on and wind-off conditions. These images can then be corrected and processed as necessary, either on the same or a different machine. This modular approach provides a benefit in that the processing for small-scale tests can easily be done with common software running on PCs. In larger-scale facilities, however, much more computing power is needed, as runs can easily produce large amounts of data that need to be processed. This leads to the requirement of high power graphics workstations and high capacity storage facilities. It is also important to note that false color is typically added to the images in the post-processing phase in order to facilitate flow visualization (PSP is monochromatic).
2. Liquid Films
This method makes use of the contrast obtained on account of the unequal rates of evaporation of a liquid film in the laminar and turbulent regions. A film of some volatile oil is applied to the surface of the model prior to starting the flow. When the air flows over this surface, the evaporation of the oil film is faster in the turbulent than in the laminar region. A clearer contrast is obtained by using black paint on the surface. This method can be easily employed for aerofoil and blade surfaces in wind tunnels.
3. Smoke
Dense smoke introduced into the flow field by a smoke generator can make the streamline pattern visible. Smoke is generally injected into the flow through an array of nozzles or holes.
Kerosene oil can be conveniently used in smoke generator. The oil is heated to its boiling point by an electric coil: the smoke is formed by introducing the vapors into the air stream. Smoke can also be produced by many other methods. For better results smoke should be light, non-poisonous and free of deposits.
4. Hydrogen Bubbles
A very easy and effective method of visualizing flow fields is the electrolytic generation of hydrogen bubbles with a platinum cathode (diameter of the order of 50 µm) placed on the model or in the flow field. Typically, depending on cathode size, the bubbles are very small, approximately 0.1 mm in diameter, and are therefore very responsive, such that they can completely trace the flow over a body or a complex flow field. Light sheet illumination produces internal reflection within the bubbles and hence visualization of the flow field. This method has general advantages over other methods because there is no contamination of the working fluid and it is very convenient to use, since bubble generation can be switched on and off electrically. Moreover, the cathode can be sized to produce bubbles over as much or as little of the model as necessary.
5. Optical methods
Optical methods for studying a flow field are valuable and widely used techniques. The refractive index of the medium (the flow field) and the velocity of light through it are functions of the density field. For a given medium and wavelength of light, the refractive index is a function of density:
n = f(ρ)
For compressible flows this can be approximately expressed as
n = 1 + β(ρ/ρref)
The basis of all the optical methods of studying compressible flow is this variation of the refractive index with density, which the different techniques exploit to a lesser or greater degree.
The three optical techniques described here are:
(i) the shadowgraph,
(ii) the interferometer, and
(iii) the schlieren system.
Each of these techniques is based on one of the variations shown in the figure.
If the variations of refractive index in the flow field are measured, the corresponding density variations can be determined from them. Density, along with pressure, can yield the values of temperature, velocity of sound, Mach number, etc.
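Since the figure referred to above is not reproduced here, the standard result it illustrates can be stated compactly: the three techniques respond to the density field to different orders,

$$ \text{interferometer: fringe shift} \propto \rho, \qquad \text{schlieren: brightness} \propto \frac{\partial \rho}{\partial x}, \qquad \text{shadowgraph: brightness} \propto \frac{\partial^2 \rho}{\partial x^2}, $$

which is why the interferometer gives quantitative density information, the schlieren system highlights density gradients, and the shadowgraph emphasises sharp features such as shock waves.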
By using these techniques the flow field can either be observed on a screen or its permanent record is obtained on a photographic plate. The great advantage of optical methods over other methods is that the instruments are not inserted into the flow; thus the flow is undisturbed in this method of flow investigation.
Though the working of these methods is described here with reference to a model in the wind tunnel test section, they can no doubt be used in a variety of other situations. The flow direction in the test section is considered perpendicular (x-direction) to the plane of the paper, while the light beam is parallel to the span (z-direction) of the model, an aerofoil or a wedge.
6. Shadow Technique
The arrangement adopted in this technique is shown in the figure and is often referred to as a shadowgraph. The collimating lens provides a collimated beam of light from the source. This beam passes through the transparent walls of the test section. The shadow picture of the flow is directly obtained on a white screen placed on the other side of the test section. The degree of brightness on the screen is proportional to the second derivative of the density, ∂²ρ/∂x²; the varying degree of brightness is therefore a measure of the variations in the density field.
This technique gives clear pictures of the density variation in flows with shocks and combustion. The method is convenient because the required equipment is inexpensive and easy to operate.
Shadowgraph visualizes the density distribution in the flow around an axisymmetric model in a supersonic flow (Courtesy High-Speed Laboratory, Dept. of Aerospace Engineering, Delft University of Technology)
7. Interferometer Technique
In this technique the variation of density in the flow field is directly determined from the interference pattern obtained on the screen or a photographic plate.
The most widely used apparatus based on this technique is the Mach-Zehnder interferometer, which is shown in the figure. It consists of two plane half-silvered mirrors (splitters) A and C, and two fully reflecting mirrors B and D. A parallel beam of light is obtained from the light source through the lens L and the concave mirror M1. The splitter A reflects a part of this beam through the transparent walls of the test section; the rays of light from the test section are reflected by the mirror D to a concave mirror M2 through the splitter C.
The part of the beam from M1 which passes through the splitter A is reflected by the mirror B to the reference section; this has transparent walls identical to those of the test section, and therefore these walls act as compensating plates. The rays from the reference section also reach the concave mirror M2 after reflection from the splitter C. Thus the mirror M2 collects the rays coming separately from the test section and the reference section and directs them onto the screen or a photographic plate. After emerging from the splitter C, the two parts of the light beam merge into one single coherent beam before reaching the mirror M2; thus the pattern of illumination on the screen is uniform when there is no flow in the test section, as in the reference section. When the flow is established in the test section, the beam of light passing through its density field is out of phase with the beam coming through the reference section; in this case the mirror M2 reflects an interference pattern onto the screen, which represents the variable density pattern in the flow field.
While the interferometer is suitable for quantitative measurement of the density variation in a flow field it requires expensive equipment which is difficult to operate.
8. Schlieren technique
In this technique the density gradient (dρ/dx) in the flow field is obtained in terms of the varying degree of brightness on the screen; the degree of brightness or intensity of illumination is proportional to the density gradient in the flow field.
The arrangement adopted in the schlieren technique is shown in the figure. A beam of light is sent through the test section from the light source by a properly oriented concave mirror M1. The beam coming from the test section is reflected onto the screen or a photographic plate through two suitably located concave mirrors M2 and M3. A sharp knife edge is inserted at the focal point of the mirror M2 to intercept about half the light. Thus, in the absence of flow through the test section, the screen is illuminated uniformly by the light escaping the knife edge. But in the presence of flow the rays of light are deflected differently (as in a prism) on account of the variable density and refractive index in the flow field. Therefore a greater or lesser part of the light beam now escapes the knife edge. This gives a varying intensity of illumination on the screen.
9. Laser techniques
Application of lasers has provided the most powerful and reliable optical method of measuring velocity, direction and turbulence in liquids and gases over a wide range. In this method a laser beam is focused in the flow field (test section) where measurement of velocity, turbulence etc. is required. The scattered light from the minute solid particles in the flow is utilized as a signal for velocity measurement by employing a number of optical and electronic equipment such as special lenses, beam splitters, photo detectors, signal processors, timing devices, data acquisition system and a computer.
Laser is an acronym for Light Amplification by Stimulated Emission of Radiation; a laser is a strong source of monochromatic and coherent light, in which light emitted from one atom of the gas is employed to amplify the original light. Helium-neon lasers are commonly used in the range 0.5-100 milliwatts; they have comparatively lower cost and a higher degree of reliability. Argon-ion lasers are used in higher power ranges, i.e. 5 milliwatts to 15 watts. CO2 lasers have a power range between 1 and 100 watts. Higher power lasers produce a greater noise level in the system, which interferes with the signal.
The laser beam has a very high intensity of light, which can be damaging to the eyes and skin. It can also start spontaneous combustion of inflammable material. Therefore proper precautions must be taken while using a laser system. Solid particles in the flow act as scattering points at the measuring stations. If they are small and have a density close to that of the fluid, their velocity can be taken as equal to the flow velocity; this condition is satisfied to a great extent in liquid flows. In air flows the small naturally occurring solid particles act as scattering points. Very small particles may not produce a signal of sufficient strength and would be lost in the "system noise"; very large particles will give erroneous results.
An artificial seeding plant can also be employed to supply solid particles of the desired size (about one micron).
Advantages of laser techniques
Lasers have wide applications in measurements in turbo machinery, wind tunnels, water tunnels, combustion studies, heat exchangers and many areas in aerospace and nuclear engineering. Their main advantages are:
• They employ a non-intrusive method which does not disturb the flow.
• They can measure velocities in regions which are inaccessible to other devices; they offer a valuable tool for boundary layer measurements.
• No calibration is required; their working is independent of the pressure, temperature and density of the fluid.
• They can be used over a wide range of velocities (0.1-300 m/s); their frequency response is very high.
• Velocity varies linearly with the signal over wide range.
• They can be easily interfaced with a computer.
Some of the disadvantages include their high cost, complex optical and electronic equipment, and the requirement for well trained and skilled operators. Installation of a laser system requires considerable preparation. In some cases a seeding plant is also needed, which further adds to the already high cost.
10. Particle Image Velocimetry (PIV)
Introduction
Particle image Velocimetry is usually a planar laser light sheet technique in which the light sheet is pulsed twice, and images of fine particles lying in the light sheet are recorded on a video camera or a photograph. The displacement of the particle images is measured in the plane of the image and used to determine the displacement of the particles in the flow. The most common way of measuring displacement is to divide the image plane into small interrogation spots and cross correlate the images from the two time exposures. The spatial displacement that produces the maximum cross-correlation statistically approximates the average displacement of the particles in the interrogation cell. Velocity associated with each interrogation spot is just the displacement divided by the time between the laser pulses.
If the velocity component perpendicular to the plane is needed, a stereographic system using two lenses can be used. Typically, PIV measures on a 100 x 100 grid with accuracy between 0.2% and 5% of full scale and spatial resolution of ~1 mm, but special designs allow for larger and smaller values. Framing rates of most PIV cameras are of the order of 10 Hz, compatible with the pulse rates of Nd:YAG lasers, which is too slow for most cinematic recording. Special systems using rapidly pulsed metal vapor lasers and fast cinematic cameras or special high speed video cameras are able to measure up to ~10,000 frames per second. Micro-PIV systems have been constructed to measure velocities in cells as small as a few microns.
Particle Image Velocimetry (PIV) is a whole-flow-field technique providing instantaneous velocity vector measurements in a cross-section of a flow. Two velocity components are measured, but use of a stereoscopic approach permits all three velocity components to be recorded, resulting in instantaneous 3D velocity vectors for the whole area. The use of modern CCD cameras and dedicated computing hardware, results in real-time velocity maps.
Features
• The technique is non-intrusive and measures the velocities of micron-sized particles following the flow.
• Velocity range from zero to supersonic.
• Instantaneous velocity vector maps in a cross-section of the flow.
• All three components may be obtained with the use of a stereoscopic arrangement
• With sequences of velocity vector maps, statistics, spatial correlations and other relevant data are available.
Results are similar to computational fluid dynamics, i.e. large eddy simulations, and real-time velocity maps are an invaluable tool for fluid dynamics researchers.
Principle
In PIV, the velocity vectors are derived from sub-sections of the target area of the particle-seeded flow by measuring the movement of particles between two light pulses:
The flow is illuminated in the target area with a light sheet. The camera lens images the target area onto the CCD array of a digital camera. The CCD is able to capture each light pulse in separate image frames.
Once a sequence of two light pulses is recorded, the images are divided into small subsections called interrogation areas (IA). The interrogation areas from each image frame, I1 and I2, are cross-correlated with each other, pixel by pixel.
The correlation produces a signal peak, identifying the common particle displacement, ΔX. An accurate measure of the displacement, and thus also of the velocity, is achieved with sub-pixel interpolation.
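One common way of achieving this sub-pixel accuracy is a three-point Gaussian fit through the correlation peak. With C0 the correlation value at the integer peak location and C-1, C+1 its neighbours along one axis (the notation here is chosen for illustration), the fractional offset of the peak is estimated as

$$ \delta = \frac{\ln C_{-1} - \ln C_{+1}}{2\,(\ln C_{-1} - 2\ln C_{0} + \ln C_{+1})}, $$

and the same fit is applied independently in each direction.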
A velocity vector map over the whole target area is obtained by repeating the cross-correlation for each interrogation area over the two image frames captured by the CCD camera.
The correlation of the two interrogation areas, I1 and I2, results in the particle displacement ΔX, represented by a signal peak in the correlation C(ΔX).
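A minimal sketch of this interrogation-area cross-correlation in Python is given below. The function and parameter names are illustrative rather than part of any particular PIV package; practical codes add window overlap, the sub-pixel peak fit shown above and outlier validation.

```python
import numpy as np

def piv_displacement(frame1, frame2, ia=32, dt=1e-3, scale=1.0):
    """Estimate a velocity vector field by cross-correlating interrogation areas.

    frame1, frame2 : 2-D grayscale images taken dt seconds apart
    ia             : side length of the square interrogation area in pixels
    dt             : time between the two laser pulses (s)
    scale          : metres per pixel in the object plane
    """
    ny, nx = frame1.shape[0] // ia, frame1.shape[1] // ia
    u = np.zeros((ny, nx))
    v = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            i1 = frame1[j*ia:(j+1)*ia, i*ia:(i+1)*ia].astype(float)
            i2 = frame2[j*ia:(j+1)*ia, i*ia:(i+1)*ia].astype(float)
            i1 -= i1.mean()
            i2 -= i2.mean()
            # FFT-based cross-correlation of the two interrogation areas
            corr = np.fft.ifft2(np.fft.fft2(i1).conj() * np.fft.fft2(i2)).real
            corr = np.fft.fftshift(corr)
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            dy -= ia // 2          # offset of the correlation peak from zero shift
            dx -= ia // 2
            u[j, i] = dx * scale / dt   # velocity = displacement / pulse separation
            v[j, i] = dy * scale / dt
    return u, v

# Example usage (hypothetical images img_a, img_b recorded 200 microseconds apart):
# u, v = piv_displacement(img_a, img_b, ia=32, dt=200e-6, scale=50e-6)
```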
PIV images are visual, just follow the seeding
Recording both light pulses in the same image frame to track the movements of the particles gives a clear visual sense of the flow structure. In air flows, the seeding particles are typically oil drops in the range 1 µm to 5 µm.
For water applications, the seeding is typically polystyrene; polyamide or hollow glass spheres in the range 5 µm to 100 µm. Any particle that follows the flow satisfactorily and scatters enough light to be captured by the CCD camera can be used.
The number of particles in the flow is of some importance in obtaining a good signal peak in the cross-correlation. As a rule of thumb, 10 to 25 particle images should be seen in each interrogation area.
Double-pulsed particle images.
When the size of the interrogation area, the magnifications of the imaging and the light-sheet thickness are known, the measurement volume can be defined.
Spatial resolution and dynamic range
Setting up a PIV measurement, the side length of the interrogation area, dIA, and the image magnification, s’/s, are balanced against the size of the flow structures to be resolved. One way of expressing this is to require the velocity gradient to be small within the interrogation area, so that all particles within the area move by nearly the same amount between the two pulses.
The highest measurable velocity is constrained by particles travelling further than the size of the interrogation area within the time Δt. The result is lost correlation between the two image frames and thus loss of velocity information. As a rule of thumb, the in-plane particle displacement should be kept below about one quarter of the interrogation-area side length.
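Expressed as a formula, with dIA taken as the interrogation-area side length projected into the flow (i.e. divided by the image magnification), this quarter rule gives a rough upper bound on the measurable velocity,

$$ V_{\max} \approx \frac{0.25\, d_{IA}}{\Delta t}, $$

so the dynamic range of a PIV measurement is set mainly by the choice of pulse separation Δt.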
The third velocity component
In normal PIV systems, the third velocity component is ‘invisible’ due to the geometry of the imaging. This third velocity component can be derived by using two cameras in a stereoscopic arrangement.
Experimental set-up for stereoscopic PIV measurements of the flow behind a car model.
11. Computer Graphics Flow Visualization
Experimental flow visualization is a starting point for flow visualization using computer graphics. The process of computer visualization is described in general, and applied to CFD. The heart of the process is the translation of physical variables into visual variables. Fluid mechanics theory and practice help to identify a set of ‘standard’ forms of visualization. To prepare the flow data to be cast in visual form, several types of operations may have to be performed on the data.
The Flow Visualization Process
Scientific visualization with computer-generated images can generally be conceived as a three-stage pipeline process. We will use an extended version of this process model here.
• Data generation:
Production of numerical data by measurement or numerical simulations. Flow data can be based on flow measurements, or can be derived from analysis of images obtained with experimental visualization techniques as described earlier, using image processing. Numerical flow simulations often produce velocity fields, sometimes combined with scalar data such as pressure, temperature, or density.
• Data enrichment and enhancement:
Modification or selection of the data, to reduce the amount or improve the information content of the data. Examples are domain transformations, sectioning, thinning, interpolation, sampling, and noise filtering.
• Visualization mapping:
Translation of the physical data to suitable visual primitives and attributes. This is the central part of the process; the conceptual mapping involves the ‘design’ of a visualization: to determine what we want to see, and how to visualize it. Abstract physical quantities are cast into a visual domain of shapes, light, colour, and other optical properties. The actual mapping is carried out by computing derived quantities from the data suitable for direct visualization. For flow visualization, an example of this is the computation of particle paths from a velocity field.
• Rendering: transformation of the mapped data into displayable images. Typical operations here are viewing transformations, lighting calculations, hidden surface removal, scan conversion, and filtering (anti-aliasing and motion blur).
• Display: showing the rendered images on a screen. A display can be direct output from the rendering process, or be simply achieved by loading a pixel file into a frame buffer; it may also involve other operations such as image file format translation, data (de)compression, and colour map manipulations. For animation, a series of precomputed rendered images may be loaded into main memory of a workstation, and displayed using a simple playback program.
The style of visualization using numerically generated data is suggested by the formalism underlying the numerical simulations. The two different analytical formulations of flow, Eulerian and Lagrangian, can be used to distinguish two classes of visualization styles:
• Eulerian: physical quantities are specified in fixed locations in a 3D field. Visualization
tends to produce static images of a whole study area. A typical Eulerian visualization is an arrow plot, showing flow direction arrows at all grid points.
• Lagrangian: physical quantities are linked to small particles moving with the flow through an area, and are given as a function of starting position and time. Visualization often leads to dynamic images (animations) of moving particles, showing only local information in the areas where particles move.
FIGURES:
Wake behind an automobile (tuft grid method)
)
Visualization with dye to study water flow in a river model
(Courtesy Delft Hydraulics)
Conclusions
In the preceding sections, we have reviewed different aspects of flow visualization mainly that of experimental visualization methods and an introduction have been made into the computer graphics flow visualization techniques. The connection between experimental and computer-aided flow visualization is now beginning to develop. The current strong demand for new flow visualization techniques, especially for large scale 3D numerical flow simulations, can only be satisfied by combining the efforts of fluid dynamics specialists, numerical analysts, and computer graphics experts. Additional knowledge will be required from perceptual and cognitive psychology, and artists and designers can also contribute to this effort.
Flow visualization will not be restricted to techniques for giving an intuitively appealing,
general impression of flow patterns, but will increasingly focus on more specific physical flow phenomena, such as turbulence, separations and reattachments, shock waves, or free liquid surfaces. Also, purely visual analysis of flow patterns will be increasingly complemented by algorithmic techniques to extract meaningful patterns and structures, that can be visualized separately.
Before they can be used as reliable research tools, the visualization techniques themselves
must also be carefully tested and validated. As we have seen in the previous sections, visualization involves a sequence of many processing steps, where approximations are frequently used and numerical errors can easily occur.
An important issue following development of visualization techniques, is the design and
implementation of flow visualization systems. Research in computer graphics flow visualization is still in its early stages, and especially 3D flow field visualization is still very much an open problem. At present, this is one of the great challenges of scientific visualization. This calls for a cooperative effort in the development of new
techniques at all stages of the flow visualization process.
References
JOURNAL PAPERS
1- Fluid Flow Visualization
Frits H. Post, Theo van Walsum
Delft University of Technology, The Netherlands*
Published in: Focus on Scientific Visualization, H. Hagen, H. Müller, G.M. Nielson (eds.), Springer Verlag, Berlin, 1993, pp. 1-40 (ISBN 3-540-54940-4)
2- Accuracy Of Pressure Sensitive Paint AIAA Journal, Vol. 39, No.1, January 2001
3- 9th International Symposium On Flow Visualisation, 2000
Flow around a three-dimensional bluff Body
S. Krajnovi´c 1and L. Davidson2
BOOKS
Compressible fluid flow YAHYA
Batchelor, G.K. (1967) An Introduction to Fluid Dynamics, Cambridge University Press
E-BOOK
Elsevier publication Fluid Flow Visualization
Abstract
The flow of air cannot be seen by the naked eye. The flow of water can be seen, but not its streamlines or velocity distribution. The science which analyses the behavior of such fluids, invisible to the eye, in the form of image information is called ‘flow visualization’, and it is extremely useful for clarifying fluid phenomena. The saying ‘seeing is believing’ most aptly expresses the importance of flow visualization.
This report presents an overview of techniques for visualization of fluid flow data.
The popular techniques of Temperature Sensitive Paint, Pressure Sensitive Paint, the Tuft Method, Hydrogen Bubbles, Optical Methods, and Particle Image Velocimetry are explained in detail. The figures for the various techniques are presented at the end of this report.
This report concentrates mainly on experimental fluid flow visualization, but important computer-aided visualization methods, such as Particle Image Velocimetry, are also summarized. A summary of all the important techniques is presented at the end of the report, and an introduction to computer graphics flow visualization is given so that the reader can gain a basic idea of the techniques of graphics visualization.
Purposes and Problems of Flow Visualization
Flow visualization has probably existed as long as fluid flow research itself. Until recently, experimental flow visualization has been the main visualization aid in fluid flow research. Experimental flow visualization techniques are applied for several reasons:
• To get an impression of the fluid flow around a scale model of a real object, without any calculations;
• As a source of inspiration for the development of new and better theories of fluid flow;
• To verify a new theory or model.
Though used extensively, these methods suffer from some problems. A fluid flow is often affected by the experimental technique, and not all fluid flow phenomena or relevant parameters can be visualized with experimental techniques. Also, the construction of small scale physical models and experimental equipment such as wind tunnels are expensive, and experiments are time consuming.
Recently a new type of visualization has emerged: computer-aided visualization. The increase of computational power has led to an increasing use of computers for numerical simulations. In the area of fluid dynamics, computers are extensively used to calculate velocity fields and other flow quantities, using numerical techniques to solve the governing Navier-Stokes equations. This has led to the emergence of Computational Fluid Dynamics (CFD) as a new field of research and practice.
To analyze the results of the complex calculations, computer visualization techniques are necessary. Humans are capable of understanding much more information when it is shown visually, rather than numerically. By using the computer not only for calculating the numerical data, but also for visualizing these data in an understandable way, the benefits of the increasing computational power are much greater.
The visualization of fluid flow simulation data may have several different purposes. One purpose is the verification of theoretical models in fundamental research. When a flow phenomenon is described by a model, this flow model should be compared with the ‘real’ fluid flow. The accuracy of the model can be verified by calculation and visualization of a flow with the model, and comparison of the results with experimental results. If the numerical results and the experimental flow are visualized in the same way, a qualitative verification by visual inspection can be very effective. Research in numerical methods for solving the flow equations can also be supported, both by visualizing the solutions found and by visualizing intermediate results during the iterative solution process.
Another purpose of fluid flow visualization is the analysis and evaluation of a design. For the design of a car, an aircraft, a harbor, or any other object that is functionally related with fluid flow, calculation and visualization of the fluid flow phenomena can be a powerful tool in design optimization and evaluation. In this type of applied research, communication of flow analysis results to others, including non-specialists, is important in the decision making process.
In practice, often both experimental and computer-aided visualization will be applied. Fluid flow visualization using computer graphics will be inspired by experimental visualization. Following the development of 3D flow solution techniques, there is especially an urgent need for visualization of 3D flow patterns. This presents many interesting but still unsolved problems to computer graphics research. Flow data are different in many respects from the objects and surfaces traditionally displayed by 3D computer graphics. New techniques are emerging for generating informative images of flow patterns; also, techniques are being developed to transform the flow visualization problem to display of traditional graphics primitives.
Experimental Flow Visualization
1. Pressure Sensitive Paint (PSP) and Temperature Sensitive Paint (TSP)
The use of luminescent molecular probes for measuring surface temperature and pressure on wind tunnel models and flight vehicles offers the promise of enhanced spatial resolution and lower costs compared to traditional techniques. These new sensors are called temperature-sensitive paint (TSP) and pressure-sensitive paint (PSP).
Traditionally, arrays of thermocouples and pressure taps have been used to obtain surface temperature and pressure distributions. These techniques can be very labor-intensive and model/flight vehicle preparation costs are high when detailed maps of temperature and pressure are desired. Further, the spatial resolution is limited by the number of instrumentation locations chosen. By comparison, the TSP and PSP techniques provide a way to obtain simple, inexpensive, full-field measurements of temperature and pressure with much higher spatial resolution. Both TSP and PSP incorporate luminescent molecules in a paint which can be applied to any aerodynamic model surface. Figure 1 shows a schematic of a paint layer incorporating a luminescent molecule.
The paint layer is composed of luminescent molecules and a polymer binder material. The resulting ‘paint’ can be applied to a surface using a brush or sprayer. As the paint dries, the solvent evaporates and leaves behind a polymer matrix with luminescent molecules embedded in it. Light of the proper wavelength to excite the luminescent molecules in the paint is directed at the model, and luminescent light of a longer wavelength is emitted by the molecules. Using the proper filters, the excitation light and luminescent emission light can be separated and the intensity of the luminescent light can be determined using a photodetector. Through the photo-physical processes known as thermal and oxygen quenching, the luminescent intensity of the paint emission is related to temperature or pressure. Hence, from the detected luminescent intensity, temperature and pressure can be determined. The polymer binder is an important ingredient of a luminescent paint, used to adhere the paint to the surface of interest. In some cases the polymer matrix is a passive anchor; in other cases, however, the polymer may significantly affect the photo-physical behavior of the paint through a complicated interaction between the luminescent molecules and the macro-molecules of the polymer. A good polymer binder should be robust enough to sustain skin friction and other forces on the surface of an aerodynamic model. Also, it must be easy to apply to the surface as a smooth, thin film, and easy to repair.
For TSP, many commercially available resins and epoxies can be chosen to serve as polymer binders, provided they are not oxygen permeable and do not degrade the activity of the luminophore molecules. In contrast, a good polymer binder for a PSP must have high oxygen permeability, besides being robust and easy to apply.
The CCD camera system for luminescent paints is the most commonly used in aerodynamic testing. A schematic of this system is shown in Figure 2. The luminescent paint (TSP or PSP) is coated on the surface of the model. The paint is excited to luminescence by the illumination source, such as a lamp or a laser. The luminescent intensity image is filtered optically to eliminate the illuminating light and then captured by a CCD camera and transferred to a computer with a frame grabber board for image processing. Both the wind-on image (at the temperature and pressure to be determined) and the wind-off image (at a known constant temperature and pressure) are obtained. The ratio between the wind-on and wind-off images is taken after the dark current level image is subtracted from both images, yielding a relative luminescent intensity image. Using the calibration relations, the surface temperature and pressure distributions can be computed from the relative luminescent intensity image.
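The image-ratio step described above lends itself to a short numerical sketch. The fragment below assumes a simple two-coefficient (Stern-Volmer type) calibration of the form I_off/I_on = A + B(p/p_ref); the coefficient values, array sizes and function name are illustrative placeholders rather than an actual paint calibration.

```python
import numpy as np

def psp_pressure_map(wind_on, wind_off, dark, a=0.15, b=0.85, p_ref=101325.0):
    """Convert PSP images to a surface pressure map.

    wind_on, wind_off, dark : 2D arrays of luminescent intensity.
    a, b : coefficients of an assumed Stern-Volmer type calibration,
           I_off/I_on = a + b * (p / p_ref); real values come from a
           paint calibration, the numbers here are placeholders.
    """
    # Subtract the dark-current level from both images, as described above.
    on = wind_on.astype(float) - dark
    off = wind_off.astype(float) - dark
    ratio = off / on                      # relative luminescent intensity
    return p_ref * (ratio - a) / b        # invert the calibration relation

# Illustrative use with synthetic 4x4 "images".
rng = np.random.default_rng(0)
dark = np.full((4, 4), 5.0)
wind_off = 1000.0 + rng.normal(0, 2, (4, 4))
wind_on = 900.0 + rng.normal(0, 2, (4, 4))
print(psp_pressure_map(wind_on, wind_off, dark))
```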
TSP has also been utilized for flow transition detection. Since convective heat transfer is much higher in turbulent flow than in laminar flow, TSP can visualize the surface temperature difference between turbulent and laminar regions. In low speed wind tunnel tests, the model is typically heated or cooled to enhance the temperature variation across the transition line.
The PSP/TSP technique provides a promising tool for measuring surface pressure distributions on a high-speed rotating blade at high spatial resolution. Instrumentation is particularly difficult in the rotating environment, and pressure taps weaken the structure of the rotating blade. Recently, a test was performed to measure the chordwise pressure distributions on the rotor blades of a high-speed axial flow compressor. TSP (Ru(bpy)-Shellac) and PSP (Ru(ph2-phen) in GE RTV 118) were applied to alternating blades. The TSP provided the temperature distributions on the blades for temperature correction of the PSP results. A scanning laser system was used for excitation and detection of luminescence. Both the TSP and PSP were excited with an argon laser and the luminescence was detected with a Hamamatsu PMT. The same system was used on an Allied Signal F109 gas turbine engine, giving the suction surface pressure map at 14,000 rpm shown in Figure 3.
Figure 3: Fan blade pressure distribution at 14,000 rpm
Characteristics of PSP:
As mentioned previously, PSP simply consists of a luminescent molecule suspended in some type of oxygen permeable binder. Currently, the majority of these binders are some form of silicone polymer. The vast majority of PSP formulations to date come in a liquid form that is suitable for use with normal spray-painting equipment and methods.
Typically, in its simplest application, PSP is the topmost layer of a multilayer coating on a model surface. The PSP is usually applied over a white undercoat, which provides two related benefits: by reflecting a large portion of the light incident upon it, the white undercoat amplifies not only the excitation illumination but the emitted luminescence as well.
Advantages:
As previously mentioned, pressure sensitive paints are used to measure surface pressures. The conventional methods of measuring these pressures are to apply pressure taps or transducers to a model, but these approaches have some significant disadvantages.
First of all, taps and transducers only allow measurements at discrete points on the model surface. The surface pressures at other locations on the model can only be interpolated from the known points. Another disadvantage is that taps and transducers are intrusive to the flow. Measurements cannot be taken downstream of other taps or transducers, since the flow is altered once it passes over the upstream disturbances. Finally, taps and transducers are time-consuming and expensive to use. Models used for determining surface loads in aircraft design typically cost $500,000 to $1 million, with approximately 30% of that cost going towards the pressure taps and their installation.
A relatively new method of surface pressure measurement utilizes pressure sensitive paint, or PSP. Pressure sensitive paint has numerous advantages over the more conventional pressure taps and transducers. The most obvious is that PSP is a field measurement, allowing for a surface pressure determination over the entire model, not just at discrete points. Hence, PSP provides a much greater spatial resolution than pressure taps, and disturbances in the flow are immediately observable.
PSP also has the advantage of being a non-intrusive technique. Use of PSP, for the most part, does not affect the flow around the model, allowing its use over the entire model surface. The use of PSP eliminates the need for a large number of pressure taps, which leads to more than one benefit. Since pressure taps do not need to be installed, models can be constructed in less time, and with less money than before. Also, since holes do not need to be drilled in the model for the installation of taps, the model strength is increased, and higher Reynolds numbers can be obtained. Not only does the PSP method reduce the cost of the model construction, but it also reduces the cost of the instrumentation needed for data collection. In addition, the equipment needed for PSP costs less than pressure taps, but it can also be easily reused for numerous models.
In aircraft design, PSP has the potential to save both time and money. The continuous data distribution on the model provided by PSP can easily be integrated over specific components, which can provide detailed surface loads. Since a model for use with the PSP technique is faster to construct, this allows for load data to be known much earlier in the design process.
Disadvantages:
Unfortunately, PSP is not without its undesirable characteristics. One of these characteristics is that the response of the luminescent molecules in the PSP coating degrades with time of exposure to the excitation illumination. This degradation occurs because of a photochemical reaction that occurs when the molecules are excited. Eventually, this degradation of the molecules determines the useful life of the PSP coating. This characteristic becomes more important for larger models, as the cost and time of PSP reapplication becomes a significant factor.
A second undesirable characteristic of PSP is that the emission intensity is affected by the local temperature. This behavior is due to the effect temperature has on the energy state of the luminescent molecules, and the oxygen permeability of the binder. This temperature dependence becomes even more significant in compressible flow tests, where the recovery temperature over the model surface is not uniform.
Experimental Setup
As seen below, the PSP experimental setup is composed of a number of separate elements. The specifications of each element are dependent upon the test conditions, objectives, and budget.
Typical PSP experimental setup
Illumination:
The illumination element ("light source") of the setup is used to excite the luminescent molecules in the PSP coating. Since the intensity of the emitted illumination is proportional to the excitation illumination, the source of illumination must be of sufficient power in the absorption spectrum of the PSP coating, and also have a stable output over time. For complex models with numerous surfaces, multiple illumination elements are often needed to achieve an adequate coverage of the model surface. Some examples of illumination elements are lasers, continuous and flash arc lamps, and simple incandescent lamps.
Imaging:
The imaging element ("camera") used in the experimental setup is heavily dependent upon the required results. In most cases, a good spatial resolution of the pressure distribution is required. Imaging elements that can provide a good spatial resolution include conventional still photography, low-light video cameras, or scientific grade CCD cameras. In most PSP applications, the electronic CCD cameras are the preferred imaging element due to their good spatial resolution and capability to reduce the data they acquire in real time. CCD cameras can be divided into two groups, conventional black and white video cameras and scientific grade CCD digital cameras.
Conventional black and white video cameras are attractive mainly due to their low cost. Typical cameras deliver an 8-bit intensity resolution over a 640 X 480 pixel spatial resolution. Even though conventional black and white video cameras are not precision scientific instruments, when coupled with a PC image processor, the results obtained are more than acceptable for qualitative analysis, and are potentially acceptable for quantitative analysis in certain conditions.
Scientific grade cooled CCD digital cameras, on the other hand, are precision scientific instruments that provide high-precision measurements, at the price of an increased cost. Typical cameras of this type can exhibit 16-bit intensity resolution and spatial resolution up to 2048 X 2048 pixels. For many PSP applications, the high resolution provided by these cameras is mandatory.
Images taken of pressures on an AIM-54 Phoenix missile separating from an F-14 fighter
Optical Filters:
In order to avoid erroneous illumination readings, it is necessary that the illumination element only output in the absorption spectrum, while the imaging element only records the emission spectrum. When lasers are used for excitation purposes, this is not an issue, as a laser only produces light in one wavelength. Most excitation sources, however, produce light in a wide spectrum. In order to prevent the excitation source spectrum from overlapping the emission spectrum, optical filters are placed over both the illumination element and the imaging element. This constraint also makes it necessary to conduct all PSP testing in a darkened test section; otherwise ambient light may contaminate the readings.
Data Acquisition & Post Processing:
The data acquisition and post processing in most PSP applications is done in a modular fashion. Initially the camera and computer acquire images for wind-on and wind-off conditions. These images can then be corrected and processed as necessary, either on the same or a different machine. This modular approach provides a benefit in that the processing for small-scale tests can easily be done with common software running on PCs. In larger-scale facilities, however, much more computing power is needed, as runs can easily produce large amounts of data that need to be processed. This leads to the requirement of high power graphics workstations and high capacity storage facilities. It is also important to note that false color is typically added to the images in the post-processing phase in order to facilitate flow visualization (PSP is monochromatic).
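As a small illustration of the last point, a monochrome ratio (or pressure) image can be given false color with a standard colormap during post-processing; the synthetic image and the output file name below are placeholders, not part of any particular facility's workflow.

```python
import numpy as np
import matplotlib.pyplot as plt

# 'ratio' stands for the monochrome relative-intensity (or pressure) image
# produced earlier; a synthetic gradient is used here instead of real data.
ratio = np.tile(np.linspace(0.8, 1.2, 256), (128, 1))

plt.imshow(ratio, cmap='jet')        # false color added in post-processing
plt.colorbar(label='I_off / I_on')
plt.title('PSP relative intensity (false color)')
plt.savefig('psp_false_color.png', dpi=150)
```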
2. Liquid Films
This method makes use of the contrast obtained on account of the unequal rates of evaporation of a liquid film in the laminar and turbulent regions. A film of some volatile oil is applied on the surface of the model prior to starting the flow. When air flows over this surface, the evaporation of the oil film is faster in the turbulent than in the laminar region. A clearer contrast is obtained by using black paint on the surface. This method can be easily employed for aerofoil blade surfaces in wind tunnels.
3. Smoke
Dense smoke introduced into a flow field by a smoke generator can make the streamline pattern visible. Smoke is generally injected into the flow through an array of nozzles or holes.
Kerosene oil can be conveniently used in the smoke generator. The oil is heated to its boiling point by an electric coil, and the smoke is formed by introducing the vapors into the air stream. Smoke can also be produced by many other methods. For better results the smoke should be light, non-poisonous and free of deposits.
4. Hydrogen Bubbles
A very easy and effective method to visualize flow fields is the electrolytic generation of hydrogen bubbles with a platinum cathode (diameter of the order of 50 µm) placed in the model/flow field. Typically, depending on cathode size, the bubbles are very small, approximately 0.1 mm in diameter, and are therefore very responsive, such that they can completely trace the flow over a body or a complex flow field. Light sheet illumination produces internal reflection within the bubbles and hence visualization of the flow field. This method has general advantages over other methods because there is no contamination of the working fluid and it is very convenient to use, i.e. bubble generation can be started and stopped simply by switching the current on and off. Moreover, the cathode can be sized to produce bubbles over as much or as little of the model as necessary.
5. Optical methods
Optical methods for studying a flow field are valuable and widely used techniques. The refractive index of the medium (the flow field) and the velocity of light through it are functions of the density field. For a given medium and wavelength of light, the refractive index is a function of density:
n = f(ρ)
For compressible flows this can be approximately expressed as
n = 1 + β (ρ/ρref)
where β is a constant for the given gas and ρref is a reference density (for example, the density at standard conditions).
The basis of all the optical methods for the study of compressible flow is this variation of the density, and hence of the refractive index, from point to point in the flow.
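A tiny numerical illustration of this relation is given below; the values β ≈ 0.000292 and ρref = 1.225 kg/m³ are commonly quoted figures for air at standard sea-level conditions and are assumed here, not taken from the text.

```python
def refractive_index(rho, beta=0.000292, rho_ref=1.225):
    """Approximate refractive index of a gas from its density,
    n = 1 + beta * (rho / rho_ref).  beta and rho_ref default to
    commonly quoted values for air at standard sea-level conditions
    (assumed here, not taken from the report)."""
    return 1.0 + beta * (rho / rho_ref)

# Even if the density halves across an expansion, the refractive index
# changes only in the fourth decimal place, which is why sensitive
# optical arrangements are needed to detect it.
print(refractive_index(1.225))   # ~1.000292
print(refractive_index(0.6125))  # ~1.000146
```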
The three optical techniques described here are:
(i) the shadowgraph,
(ii) the interferometer, and
(iii) the schlieren system.
Each of these techniques is based on one of these variations: the interferometer responds to the density itself, the schlieren system to its first derivative, and the shadowgraph to its second derivative.
If the variations of refractive index in the flow field are measured, the corresponding density variations can be determined from them. Density, along with pressure, can then yield the values of temperature, velocity of sound, Mach number, etc.
By using these techniques the flow field can either be observed on a screen or its permanent record is obtained on a photographic plate. The great advantage of optical methods over other methods is that the instruments are not inserted into the flow; thus the flow is undisturbed in this method of flow investigation.
Though the working of these methods is described here with reference to a model in the wind tunnel test section, they can be used in a variety of other situations. The flow direction in the test section is considered perpendicular (x-direction) to the plane of the paper, while the light beam is parallel to the span (z-direction) of the model, e.g. an aerofoil or a wedge.
6. Shadow Technique
The arrangement adopted in this technique is shown in Fig. and is often referred to as a shadowgraph. The collimating lens provides a collimated beam of light from the source. This beam passes through the transparent walls of the test section. The shadow picture of the flow is obtained directly on a white screen placed on the other side of the test section. The degree of brightness on the screen is proportional to the second derivative of the density, ∂²ρ/∂x²; the varying degree of brightness is therefore a measure of the variations in the density field.
This technique gives clear pictures of the density variation in flows with shocks and combustion. The method is convenient because the required equipment is inexpensive and easy to operate.
Shadowgraph visualizes the density distribution in the flow around an axisymmetric model in a supersonic flow (Courtesy High-Speed Laboratory, Dept. of Aerospace Engineering, Delft University of Technology)
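The dependence on the second derivative can be illustrated with a short numerical sketch; the synthetic tanh density profile below is an assumed stand-in for a smeared shock, not measured data.

```python
import numpy as np

# Synthetic 1D density profile with a smeared "shock" at x = 0.
x = np.linspace(-1.0, 1.0, 401)
rho = 1.0 + 0.5 * np.tanh(x / 0.05)

# Shadowgraph brightness variation is proportional to d^2(rho)/dx^2;
# approximate it here with repeated central finite differences.
d2rho_dx2 = np.gradient(np.gradient(rho, x), x)

# The bright/dark pair straddling the shock appears as a sign change
# in the second derivative, located just either side of x = 0.
print(x[np.argmax(d2rho_dx2)], x[np.argmin(d2rho_dx2)])
```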
7. Interferometer Technique
In this technique the variation of density in the flow field is directly determined from the interference pattern obtained on the screen or a photographic plate.
The most widely used apparatus based on this technique is the Mach-Zehnder interferometer, which is shown in the figure. It consists of two plane half-silvered mirrors (splitters) A and C, and two fully reflecting mirrors B and D. A parallel beam of light is obtained from the light source through the lens L and the concave mirror M1. The splitter A reflects a part of this beam to the transparent walls of the test section; the rays of light from the test section are reflected by the mirror D to a concave mirror M2 through the splitter C.
The part of the beam from M1 which passes through the splitter A is reflected by the mirror B to the reference section; this has transparent walls identical to those of the test section, and these walls therefore act as compensating plates. The rays from the reference section also reach the concave mirror M2 after reflection from the splitter C. Thus the mirror M2 collects the rays coming separately from the test section and the reference section and directs them onto the screen or a photographic plate. After emerging from the splitter C the two parts of the light beam merge into one single coherent beam before reaching the mirror M2; thus the pattern of illumination on the screen is uniform when there is no flow in the test section, just as in the reference section. When flow is established in the test section, the beam of light passing through its density field will be out of phase with the beam coming through the reference section; in this case the mirror M2 reflects an interference pattern onto the screen, which represents the variable density pattern in the flow field.
While the interferometer is suitable for quantitative measurement of the density variation in a flow field it requires expensive equipment which is difficult to operate.
8. Schlieren technique
In this technique the density gradient (dρ/dx) in the flow field is obtained in terms of the varying degree of brightness on the screen; the degree of brightness, or intensity of illumination, is proportional to the density gradient in the flow field.
The arrangement adopted in the Schlieren technique is shown in Fig.
A beam of light is sent through the test section from the light source by a properly oriented concave mirror M1. The beam coming from the test section is reflected onto the screen or a photographic plate through two suitably located concave mirrors M2 and M3. A sharp knife edge is inserted at the focal point of the mirror M2 to intercept about half the light. Thus, in the absence of flow through the test section, the screen is illuminated uniformly by the light escaping the knife edge. In the presence of flow, however, the rays of light are deflected differently (as in a prism) on account of the variable density and refractive index in the flow field, so a greater or lesser part of the light beam escapes the knife edge. This gives a varying intensity of illumination on the screen.
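A companion sketch to the shadowgraph example shows how the schlieren brightness follows the first derivative of density; the knife-edge level and sensitivity factor below are arbitrary values chosen only for illustration.

```python
import numpy as np

# Same synthetic density field as in the shadowgraph sketch.
x = np.linspace(-1.0, 1.0, 401)
rho = 1.0 + 0.5 * np.tanh(x / 0.05)

# Schlieren brightness varies with the density gradient d(rho)/dx,
# superposed on the uniform illumination passed by the knife edge
# (taken here as 0.5 of full brightness, an arbitrary choice).
knife_edge_level = 0.5
gain = 0.02                       # arbitrary sensitivity factor
brightness = knife_edge_level + gain * np.gradient(rho, x)
brightness = np.clip(brightness, 0.0, 1.0)

print(brightness.min(), brightness.max())   # brightest where the gradient peaks
```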
9. Laser techniques
Application of lasers has provided the most powerful and reliable optical method of measuring velocity, direction and turbulence in liquids and gases over a wide range. In this method a laser beam is focused in the flow field (test section) where measurement of velocity, turbulence etc. is required. The scattered light from minute solid particles in the flow is utilized as a signal for velocity measurement, employing a number of optical and electronic components such as special lenses, beam splitters, photodetectors, signal processors, timing devices, a data acquisition system and a computer.
Laser is an acronym for Light Amplification by Stimulated Emission of Radiation; a laser is a strong source of monochromatic and coherent light, in which light emitted from one atom of the gas is employed to amplify the original light. Helium-neon lasers, which have comparatively lower cost and a higher degree of reliability, are commonly used in the range 0.5-100 milliwatts. Argon-ion lasers are used in higher power ranges, i.e. from a few milliwatts up to about 15 watts. CO2 lasers have a power range between 1 and 100 watts. Higher power lasers produce a greater noise level in the system, which interferes with the signal.
The laser beam has a very high intensity of light, which can be damaging to the eyes and skin. It can also start spontaneous combustion of inflammable material. Therefore proper precautions must be taken while using a laser system. Solid particles in the flow act as scattering points at the measuring stations. If they are very small, with a density close to that of the fluid, their velocity can be taken as equal to the flow velocity; this condition is satisfied to a great extent in liquid flows. In air flows the small naturally occurring solid particles act as scattering points. Very small particles may not produce a signal of sufficient strength and would be lost in the "system noise"; very large particles will give erroneous results.
An artificial seeding plant can also be employed to supply solid particles of the desired size (about one micron).
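Whether a seeding particle of a given size follows the flow can be estimated from its Stokes response time, tau = rho_p d_p²/(18 µ); this standard estimate, and the particle and air properties used below, are assumptions for illustration rather than values given in the text.

```python
def stokes_response_time(d_p, rho_p, mu=1.8e-5):
    """Stokes response time of a small seeding particle,
    tau = rho_p * d_p**2 / (18 * mu).  mu defaults to an assumed
    value for the viscosity of air at room temperature."""
    return rho_p * d_p**2 / (18.0 * mu)

# A one-micron oil droplet (density ~900 kg/m^3, an assumed typical value)
# responds within a few microseconds, so it follows most air flows faithfully.
print(stokes_response_time(1e-6, 900.0))   # ~2.8e-6 s
```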
Advantages of laser techniques
Lasers have wide applications in measurements in turbo machinery, wind tunnels, water tunnels, combustion studies, heat exchangers and many areas in aerospace and nuclear engineering. Their main advantages are:
• They employ a non-intrusive method which does not disturb the flow.
• They can measure velocities in regions which are inaccessible to other devices, and are a valuable tool for boundary layer measurements.
• No calibration is required; their working is independent of the pressure, temperature and density of the fluid.
• They can be used over a wide range of velocities (0.1-300 m/s); their frequency response is very high.
• Velocity varies linearly with the signal over a wide range.
• They can be easily interfaced with a computer.
Some of the disadvantages include their high cost, complex optical and electronic equipment, and the requirement of well trained and skilled operators. Installation of a laser system requires considerable preparation. In some cases a seeding plant is also needed, which further adds to the already high cost.
10. Particle Image Velocimetry (PIV)
Introduction
Particle Image Velocimetry is usually a planar laser light sheet technique in which the light sheet is pulsed twice, and images of fine particles lying in the light sheet are recorded on a video camera or a photograph. The displacement of the particle images is measured in the plane of the image and used to determine the displacement of the particles in the flow. The most common way of measuring displacement is to divide the image plane into small interrogation spots and cross-correlate the images from the two time exposures. The spatial displacement that produces the maximum cross-correlation statistically approximates the average displacement of the particles in the interrogation cell. The velocity associated with each interrogation spot is simply this displacement divided by the time between the laser pulses.
If the velocity component perpendicular to the plane is needed, a stereoscopic system using two lenses can be used. Typically, PIV measures on a 100 x 100 grid with an accuracy between 0.2% and 5% of full scale and a spatial resolution of about 1 mm, though special designs allow for larger and smaller values. Framing rates of most PIV cameras are of the order of 10 Hz, compatible with the pulse rates of Nd:YAG lasers, which is too slow for most cinematic recording. Special systems using rapidly pulsed metal vapor lasers and fast cinematic cameras or special high speed video cameras are able to measure up to ~10,000 frames per second. Micro-PIV systems have been constructed to measure velocities in cells as small as a few microns.
Particle Image Velocimetry (PIV) is a whole-flow-field technique providing instantaneous velocity vector measurements in a cross-section of a flow. Two velocity components are measured, but use of a stereoscopic approach permits all three velocity components to be recorded, resulting in instantaneous 3D velocity vectors for the whole area. The use of modern CCD cameras and dedicated computing hardware, results in real-time velocity maps.
Features
• The technique is non-intrusive and measures the velocities of micron-sized particles following the flow.
• Velocity range from zero to supersonic.
• Instantaneous velocity vector maps in a cross-section of the flow.
• All three components may be obtained with the use of a stereoscopic arrangement.
• With sequences of velocity vector maps, statistics, spatial correlations and other relevant data are available.
The results are similar in form to those of computational fluid dynamics, e.g. large eddy simulations, and real-time velocity maps are an invaluable tool for fluid dynamics researchers.
Principle
In PIV, the velocity vectors are derived from sub-sections of the target area of the particle-seeded flow by measuring the movement of particles between two light pulses:
The flow is illuminated in the target area with a light sheet. The camera lens images the target area onto the CCD array of a digital camera. The CCD is able to capture each light pulse in separate image frames.
Once a sequence of two light pulses is recorded, the images are divided into small subsections called interrogation areas (IA). The interrogation areas from each image frame, I1 and I2, are cross-correlated with each other, pixel by pixel.
The correlation produces a signal peak, identifying the common particle displacement, ΔX. An accurate measure of the displacement, and thus also of the velocity, is achieved with sub-pixel interpolation.
A velocity vector map over the whole target area is obtained by repeating the cross-correlation for each interrogation area over the two image frames captured by the CCD camera.
The correlation of the two interrogation areas, I1 and I2, results in the particle displacement ΔX, represented by a signal peak in the correlation C(ΔX).
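The cross-correlation step described above can be sketched in a few lines; the FFT-based correlation and three-point Gaussian sub-pixel fit below are one common way of doing it, given here as an assumption rather than as the algorithm of any particular PIV system.

```python
import numpy as np

def piv_displacement(i1, i2):
    """Mean particle displacement (dy, dx) in pixels between two
    interrogation areas, from the peak of their FFT-based
    cross-correlation with a three-point Gaussian sub-pixel fit."""
    i1 = i1.astype(float) - i1.mean()
    i2 = i2.astype(float) - i2.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(i1)) * np.fft.fft2(i2)))
    corr = np.fft.fftshift(corr)
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def gauss_fit(cm, c0, cp):
        # three-point Gaussian interpolation; falls back to 0 if invalid
        if min(cm, c0, cp) <= 0:
            return 0.0
        denom = np.log(cm) - 2.0 * np.log(c0) + np.log(cp)
        return 0.0 if denom == 0 else 0.5 * (np.log(cm) - np.log(cp)) / denom

    dy = py - corr.shape[0] // 2
    dx = px - corr.shape[1] // 2
    if 0 < py < corr.shape[0] - 1:
        dy += gauss_fit(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    if 0 < px < corr.shape[1] - 1:
        dx += gauss_fit(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    return dy, dx

# Synthetic test: shift a random particle pattern by (2, 3) pixels.
rng = np.random.default_rng(1)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(2, 3), axis=(0, 1))
print(piv_displacement(frame1, frame2))   # close to (2.0, 3.0)
```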
PIV images are visual: just follow the seeding.
Recording both light pulses in the same image frame to track the movements of the particles gives a clear visual sense of the flow structure. In air flows, the seeding particles are typically oil drops in the range 1 µm to 5 µm.
For water applications, the seeding is typically polystyrene, polyamide or hollow glass spheres in the range 5 µm to 100 µm. Any particle that follows the flow satisfactorily and scatters enough light to be captured by the CCD camera can be used.
The number of particles in the flow is of some importance in obtaining a good signal peak in the cross-correlation. As a rule of thumb, 10 to 25 particle images should be seen in each interrogation area.
Double-pulsed particle images.
Spatial resolution and dynamic range
Setting up a PIV measurement, the side length of the interrogation area, dIA, and the image magnification, s’/s, are balanced against the size of the flow structures to be resolved. One way of expressing this is to require the velocity gradient to be small within the interrogation area, so that all the particles in the area move by nearly the same amount between the two pulses.
The highest measurable velocity is constrained by particles traveling further than the size of the interrogation area within the time Δt; the result is lost correlation between the two image frames and thus loss of velocity information. As a rule of thumb, the in-plane particle displacement should be kept below about one quarter of the interrogation-area side length.
When the size of the interrogation area, the magnification of the imaging and the light-sheet thickness are known, the measurement volume can be defined.
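A small helper illustrates how the pulse separation Δt can be chosen from this constraint, using the one-quarter rule of thumb quoted above; the camera and flow numbers in the example are assumed values, not taken from the text.

```python
def max_pulse_separation(u_max, ia_side_px, pixel_size, magnification, fill=0.25):
    """Largest pulse separation dt (s) such that the fastest particles move
    less than `fill` (default one quarter, the rule of thumb above) of the
    interrogation-area side between the two light pulses.

    u_max         : highest expected in-plane velocity in the flow (m/s)
    ia_side_px    : interrogation-area side length in pixels
    pixel_size    : sensor pixel pitch (m)
    magnification : image magnification s'/s
    """
    ia_side_in_flow = ia_side_px * pixel_size / magnification  # metres in the flow
    return fill * ia_side_in_flow / u_max

# Example: 32 px interrogation areas, 6.7 um pixels, magnification 0.1,
# flow velocities up to 10 m/s  ->  dt of roughly 54 microseconds.
print(max_pulse_separation(10.0, 32, 6.7e-6, 0.1))
```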
The third velocity component
In normal PIV systems, the third velocity component is "invisible" due to the geometry of the imaging. This third velocity component can be derived by using two cameras in a stereoscopic arrangement.
Experimental set-up for stereoscopic PIV measurements of the flow behind a car model.
11. Computer Graphics Flow Visualization
Experimental flow visualization is a starting point for flow visualization using computer graphics. The process of computer visualization is described in general, and applied to CFD. The heart of the process is the translation of physical variables to visual variables. Fluid mechanics theory and practice help to identify a set of ‘standard’ forms of visualization. To prepare the flow data to be cast in visual form, several types of operations may have to be performed on the data.
The Flow Visualization Process
Scientific visualization with computer-generated images can generally be conceived as a three-stage pipeline process. We will use an extended version of this process model here.
• Data generation:
Production of numerical data by measurement or numerical simulations. Flow data can be based on flow measurements, or can be derived from analysis of images obtained with experimental visualization techniques as described earlier, using image processing. Numerical flow simulations often produce velocity fields, sometimes combined with scalar data such as pressure, temperature, or density.
• Data enrichment and enhancement:
Modification or selection of the data, to reduce the amount or improve the information content of the data. Examples are domain transformations, sectioning, thinning, interpolation, sampling, and noise filtering.
• Visualization mapping:
Translation of the physical data to suitable visual primitives and attributes. This is the central part of the process; the conceptual mapping involves the ‘design’ of a visualization: to determine what we want to see, and how to visualize it. Abstract physical quantities are cast into a visual domain of shapes, light, colour, and other optical properties. The actual mapping is carried out by computing derived quantities from the data suitable for direct visualization. For flow visualization, an example of this is the computation of particle paths from a velocity field (a sketch of this computation is given after the list of visualization styles below).
• Rendering: transformation of the mapped data into displayable images. Typical operations here are viewing transformations, lighting calculations, hidden surface removal, scan conversion, and filtering (anti-aliasing and motion blur).
• Display: showing the rendered images on a screen. A display can be direct output from the rendering process, or be simply achieved by loading a pixel file into a frame buffer; it may also involve other operations such as image file format translation, data (de)compression, and colour map manipulations. For animation, a series of precomputed rendered images may be loaded into main memory of a workstation, and displayed using a simple playback program.
The style of visualization using numerically generated data is suggested by the formalism
underlying the numerical simulations. The two different analytical formulations of flow, Eulerian and Lagrangian, can be used to distinguish two classes of visualization styles:
• Eulerian: physical quantities are specified in fixed locations in a 3D field. Visualization
tends to produce static images of a whole study area. A typical Eulerian visualization is an arrow plot, showing flow direction arrows at all grid points.
• Lagrangian: physical quantities are linked to small particles moving with the flow through an area, and are given as a function of starting position and time. Visualization often leads to dynamic images (animations) of moving particles, showing only local information in the areas where particles move.
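The mapping example mentioned above, computing particle paths from a velocity field, can be sketched as follows; the simple Euler integrator and the solid-body vortex field are illustrative assumptions (a real system would typically use a higher-order scheme such as Runge-Kutta).

```python
import numpy as np

def particle_path(velocity, x0, dt=0.01, steps=500):
    """Trace a particle path (pathline) through a velocity field by simple
    Euler integration -- the 'visualization mapping' step that turns an
    Eulerian field into Lagrangian geometry.  `velocity(x, t)` returns the
    2D velocity at position x and time t."""
    path = [np.asarray(x0, dtype=float)]
    for i in range(steps):
        x = path[-1]
        path.append(x + dt * np.asarray(velocity(x, i * dt)))
    return np.array(path)

# Illustrative steady field: solid-body rotation about the origin.
def vortex(x, t):
    return np.array([-x[1], x[0]])

path = particle_path(vortex, x0=(1.0, 0.0))
print(path[0], path[-1])   # the particle traces a roughly circular orbit
```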
FIGURES:
Wake behind an automobile (tuft grid method)
Visualization with dye to study water flow in a river model
(Courtesy Delft Hydraulics)
Conclusions
In the preceding sections, we have reviewed different aspects of flow visualization, mainly experimental visualization methods, and an introduction has been given to computer graphics flow visualization techniques. The connection between experimental and computer-aided flow visualization is now beginning to develop. The current strong demand for new flow visualization techniques, especially for large scale 3D numerical flow simulations, can only be satisfied by combining the efforts of fluid dynamics specialists, numerical analysts, and computer graphics experts. Additional knowledge will be required from perceptual and cognitive psychology, and artists and designers can also contribute to this effort.
Flow visualization will not be restricted to techniques for giving an intuitively appealing,
general impression of flow patterns, but will increasingly focus on more specific physical flow phenomena, such as turbulence, separations and reattachments, shock waves, or free liquid surfaces. Also, purely visual analysis of flow patterns will be increasingly complemented by algorithmic techniques to extract meaningful patterns and structures, that can be visualized separately.
Before they can be used as reliable research tools, the visualization techniques themselves
must also be carefully tested and validated. As we have seen in the previous sections, visualization involves a sequence of many processing steps, where approximations are frequently used and numerical errors can easily occur.
An important issue following the development of visualization techniques is the design and
implementation of flow visualization systems. Research in computer graphics flow visualization is still in its early stages, and especially 3D flow field visualization is still very much an open problem. At present, this is one of the great challenges of scientific visualization. This calls for a cooperative effort in the development of new
techniques at all stages of the flow visualization process.
References
JOURNAL PAPERS
1. Post, Frits H. and van Walsum, Theo (Delft University of Technology, The Netherlands), "Fluid Flow Visualization", in: Focus on Scientific Visualization, H. Hagen, H. Müller, G.M. Nielson (eds.), Springer Verlag, Berlin, 1993, pp. 1-40 (ISBN 3-540-54940-4).
2. "Accuracy of Pressure Sensitive Paint", AIAA Journal, Vol. 39, No. 1, January 2001.
3. Krajnović, S. and Davidson, L., "Flow around a Three-Dimensional Bluff Body", 9th International Symposium on Flow Visualisation, 2000.
BOOKS
Yahya, Compressible Fluid Flow.
Batchelor, G.K. (1967), An Introduction to Fluid Dynamics, Cambridge University Press.
E-BOOK
Fluid Flow Visualization, Elsevier.
ERGONOMIC SEAT DESIGN
CONTENTS
1. Abstract
2. Ergonomics
3. Ergonomic Products
4. Uses of Ergonomics
5. Seat Design
6. Anthropometric Aspects
7. Seat Foam
8. Conclusion
9. Bibliography
ABSTRACT
The advancements made in the field of technology over the last decade have greatly helped the automotive sector, which has been one of its key customers. Safety and comfort of the passengers have emerged as prime concerns for vehicle manufacturers due to the stringent government regulations, which are updated with time.
The paper provides a brief overview of the conditions under which the body is completely comfortable, and these have been taken into account while designing the seats. The proper design of seats improves the aesthetics and ergonomics of the vehicle and also adds to its value as a safety feature. The feasibility of implementing such sophisticated seats under Indian conditions has also been gauged.
ERGONOMICS?
The term ergonomics, which comes from the Greek words “ERGON” meaning “WORK” and “NOMOS” meaning “LAWS”, first appeared in a Polish article published in 1857. The study of human factors did not gain much attention until World War II. Accidents with military equipment were often blamed on human errors, but the investigations revealed that some were caused by poorly designed controls. The modern discipline of ergonomics was born in the United Kingdom on July 12, 1949, at a meeting of those interested in human work problems in the British navy. At another meeting, held on February 16, 1950, the term ergonomics was formally adopted for this growing discipline.
Today in the United States, ergonomics professionals belong to the Human Factors and Ergonomics Society (HFES), an organization with over 5000 members interested in topics ranging from aging to aerospace to computers. Ergonomic design makes consumer products safer, easier to use, and more reliable.
“Ergonomics, or human factors, is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system”, i.e. it deals with the scientific study of the relationship between humans and their environment. It is “fitting the job to the worker.”
ERGONOMICALLY DESIGNED PRODUCTS
An ergonomically designed toothbrush has a broad handle for easy grip, a bent neck for easier access to back teeth, and a bristle head shaped for better tooth surface contact. Ergonomic design has dramatically changed the interior appearance of automobiles. The steering wheel, once a solid, awkward disc, is now larger and padded for an easier, more comfortable grip. Its center is removed to improve the driver’s view of the instruments on the dashboard. Larger, contoured seats, adjustable to suit a variety of body sizes and posture preferences, have replaced the small, upright seats of the early automobiles. Equipped with seatbelts and airbags that prevent the face and neck from snapping backwards in the event of a collision, modern automobiles are not only comfortable but also safer. Virtually all automotive and component manufacturers already recognize ergonomics as an important part of the vehicle design process.
An ergonomically designed chair
APPLICATIONS OF ERGONOMICS
Size and shape
Some years ago, researchers compared the relative positions of the controls on a lathe with the size of an average male worker. It was found that the lathe operator would have to stoop and move from side to side to operate the lathe controls. An ‘ideal’ sized person to fit the lathe would be just 4.5 feet tall, 2 feet across the shoulders, and have an arm span of eight feet.
This example epitomizes the shortcoming in design when no account has been taken of the user. People come in all shapes and sizes, and the ergonomist takes this variability into account when influencing the design process.
The branch of ergonomics that deals with human variability in size, shape and strength is called anthropometry.
Vision
Vision is usually the primary channel for information, yet systems are often so poorly designed that the user is unable to see the work area clearly. Many workers using computers cannot see their screens because of glare or reflections. Others, doing precise assembly tasks, have insufficient lighting and suffer eyestrain and reduced output as a result.
Sound
Sound can be a useful way to provide information, especially for warning signals. However, care must be taken not to overload this sensory channel. A recent airliner had 16 different audio warnings, far too many for a pilot to deal with in an emergency situation. A more sensible approach was to have just a few audio signals that alert the pilot and direct him to a visual display for detailed guidance.
Job design
One goal of ergonomics is to design jobs to fit people. This means taking account of differences such as size, strength and ability to handle information for a wide range of users. Then the tasks, the workplace and tools are designed around these differences. The benefits are improved efficiency, quality and job satisfaction. The costs of failure include increased error rates and physical fatigue - or worse.
Human error
In some industries the impact of human errors can be catastrophic. These include the nuclear and chemical industries, rail and sea transport and aviation, including air traffic control.
When disasters occur, the blame is often laid with the operators, pilots or drivers concerned - and labeled 'human error'. Often though, the errors are caused by poor equipment and system design.
Ergonomists working in these areas pay particular attention to the mental demands on the operators, designing tasks and equipment to minimize the chances of misreading information or operating the wrong controls, for example.
SEAT DESIGN.
INTRODUCTION
Drivers spend a great deal of time behind the wheel and encounter a wide range of road conditions. Consequently, they are frequently exposed to shocks when their vehicles encounter irregularities. Shocks are transmitted to the driver when the seat suspension runs out of travel, phenomena known as “bottoming” and “topping”. Heavy drivers who adjust the seat height away from the centre of seat travel are at increased risk of bottoming and topping. Researchers generally agree that exposure to shock increases the risk of spinal injury and lower back pain for drivers. Extremely high shock levels, such as those encountered in an accident, can cause compressive fracture of the spine, while chronic exposure to lower levels can lead to disc degeneration and lower back pain. In addition to the increased health risks, drivers who experience frequent bottoming and topping report increased levels of fatigue. Topping and bottoming also present a safety risk, as these events can cause the driver to temporarily lose control of the vehicle when his feet and hands are thrown off the pedals and steering wheel.
PASSIVE SEAT DESIGN
Today, most driver seats have an air-ride suspension and a passive damper to isolate the driver from vibration. The seats are typically designed to isolate the driver from moderate levels of vibration between 4 and 8 Hz, because the human body is most sensitive to seat vibrations in this range. However, seat suspensions designed to effectively isolate moderate vibration at 4-8 Hz are too soft to prevent the suspension from bottoming and topping when the vehicle encounters severe road conditions. Although some seat designs employ elastomer snubbers to absorb some of the impact energy of bottoming and topping, snubbers generally do not provide adequate protection for the driver. In addition, when a seat bottoms out, energy is stored in the snubbers and air spring and then released, propelling the seat and driver upward and often causing the suspension to top out. Stiffening the spring and/or damper provides additional protection from bottoming and topping, but at the expense of overall vibration isolation. Thus, passive seat designs always sacrifice some degree of either vibration or shock isolation.
ANTHROPOMETRY
In order to design a seat it is necessary to consider the structure of the human body. Various aspects such as seat height, width, depth, backrest and armrest depend on the dimensions of the body. The further discussion aims at understanding the anthropometrics of seat design.
The branch of ergonomics that deals with human variability in size, shape and strength is called anthropometry. Tables of anthropometric data are used by ergonomists to ensure that places and items that they are designing fit the users.
ANTHROPOMETRIC ASPECTS OF SEAT DESIGN
SEAT HEIGHT (H)
As the seat height increases beyond the popliteal height of the user, pressure is felt on the underside of the thighs. The resulting reduction of circulation to the lower extremities may lead to ‘pins and needles’, swollen feet and considerable discomfort. As the height decreases the user will (a) tend to flex the spine more (due to the need to achieve an acute angle between thigh and trunk); (b) experience greater problems in standing up and sitting down, due to the distance through which his centre of gravity must move; and (c) require greater leg room. In general, therefore, the optimal seat height for many purposes is close to the popliteal height, and where this cannot be achieved a seat that is too low is preferable to one that is too high. If it is necessary to make a seat higher than this, shortening the seat and rounding off its front edge in order to minimize the under-thigh pressure may mitigate the ill effects. It is of overriding importance that the height of a seat should be appropriate for comfortable driving.
SEAT DEPTH (D)
If the depth is increased beyond the buttock-popliteal length, the user will not be able to engage the backrest effectively without unacceptable pressure on the backs of the knees. Furthermore, the deeper the seat, the greater are the problems of standing up and sitting down. The lower limit of seat depth is less easy to define. As little as 300 mm will still support the ischial tuberosities and may well be satisfactory in some circumstances. Tall people sometimes complain that the seats of easy chairs are too short; an inadequate backrest may well be to blame.
SEAT WIDTH
For purposes of support, a width that is some 25 mm less on either side than the maximum breadth of the hips is all that is required; hence 350 mm will be adequate. However, clearance between armrests must be adequate for the largest user. In practice, allowing for clothing and leeway, a minimum of 500 mm is required.
BACKREST DIMENSIONS (C)
The higher the backrest, the more effective it will be in supporting the weight of the trunk. This is always desirable, but in some circumstances other requirements, such as the mobility of the shoulders, may be more important. We may distinguish three varieties of backrest, each of which may be appropriate under certain circumstances: the low-level backrest, the medium-level backrest and the high-level backrest.
The low-level backrest provides support for the lumbar and lower thoracic region only and finishes below the level of the shoulder blades, thus allowing freedom of movement for the shoulders and arms; old-fashioned typists’ chairs generally had low-level backrests. To support the lower back and leave the shoulder regions free, an overall backrest height (C) of about 400 mm is required.
The medium – level backrest also supports the upper back and shoulder regions. Most modern seats fall into this category. For support to mid – thoracic level an overall backrest height of about 500 mm is required and for full shoulder support about 650 mm. A figure of 500 mm is often quoted for office chairs.
The high – level backrest gives full head and neck support – for the 95th percentile man an overall backrest height of 900 mm is required. Whatever its height, it will generally be preferable and sometimes essential for the backrest to be contoured to the shape of the spine, and in particular to give ‘positive support’ to the lumbar region in the form of a convexity or pad. To achieve this end, the backrest should support you in the same place as you would support yourself with your hands to ease an aching back.
A medium – or high – level backrest should be flat or slightly concave but the contouring of the backrest should in no cases be excessive in fact a curve that is too pronounced is probably worse than no curve at all. It was found that a lumbar pad that protrudes 40 mm from the main plane of the backrest at its maximum point would support the back in a position that approximates to that of normal standing.
BACKREST ANGLE OR RAKE (A)
As the backrest angle increases, a greater proportion of the weight of the trunk is supported – hence the compressive force between the trunk and pelvis is diminished. Furthermore, increasing the angle between trunk and thighs improves lordosis. However, the horizontal component of the compressive force increases. This will tend to drive the buttocks forward out of the seat counteracted by (a) an adequate seat tilt; (b) high – friction upholstery; (c) muscular effort from the subject. Increased rake also leads to increased difficulty in the stand – up sit – down action. Interaction of these factors, together with a consideration of task demands, will determine the optimal rake, which will commonly be between 100° and 110°. A pronounced rake is not compatible with a low – or medium – backrest since the upper parts of the body becomes highly unstable.
SEAT ANGLE OR TILT (B)
== i
A positive seat angle helps the user to maintain good contact with the backrest and helps to counteract any tendency to slide out of the seat. Excessive tilt reduces hip/ trunk angle and ease of standing up and sitting down. For most purposes 5 - 10 is a suitable compromise.
ARMRESTS
Armrests may give additional postural support and be an aid to standing up and sitting down. Armrests should support the fleshy part of the forearm, but unless very well padded they should not engage the bony parts of the elbow where the highly sensitive unlar nerve is near the surface; a gap of perhaps 100 mm between the armrest and the seatback may, therefore, be desirable. If the chair is to be used with a table the armrest should not limit access, since the armrest should not, in these circumstances, extend more than 350 mm in front of the seat back. An elbow rest hat is somewhat lower than sitting elbow height is probably preferable to one that is higher, if a relaxed posture is to be achieved. An elbow rest 200 – 250 mm above the seat surface is generally considered suitable.
LEGROOM
In a variety of sitting workstations the provision of adequate lateral, vertical, and forward legroom are essential if the user is to adopt a satisfactory posture.
Lateral legroom
Lateral legroom (e.g. the ‘knee hole’ of a desk) must give clearance for the thighs and knees.
Vertical legroom
Requirements will, in some circumstances, be determined by the knee weight of a tall user. Alternatively, thigh clearance above the highest seat position may be more relevant – adding the 95th percentile male popliteal height and thigh thickness gives a figure of 700 mm. Standards quote a minimum of 650mm for a normal seat.
Forward legroom
This is most difficult thing to calculate. At knee level clearance is determined by buttock – knee length from the back of a fixed seat. In this case clearance is determined by buttock – knee length minus abdominal depth, which will be around 425 mm for a male who is a 95th percentile in the former and a 5th percentile in the latter. At floor level an additional 150 mm clearance for the feet gives a figure of 795 mm from the seat back or 575 mm from the dashboard. All of these figures are based on the assumption of a 95th percentile male sitting on a seat that is adjusted to approximately his own popliteal height, with his lower legs vertical. If the seat height is in fact lower than this he will certainly wish to stretch his legs forward. A rigorous calculation of the 95th percentile clearance requirements in these circumstances would be complex but an approximate value may be derived as follows.
Consider a person of buttock – popliteal length B, popliteal height P, and foot length F sitting on a seat height H. He stretches out his legs so that his popliteal region is level with the seat surface. The total horizontal distance between buttocks and toes (D) is approximated by
D = B+ (P2 - H2)1/2 + F
(Ignoring the effects of ankle flexion.) Hence, in the extreme case, of a male who is a 95th percentile in the above dimensions, sitting on a seat that is 400mm in height requires a total floor level clearance of around 1190mm from the seat back or 970 mm from the table edge. (if he is also a 5th percentile in abdominal depth). Such a figure is needlessly generous for most purposes; most ergonomic sources quote a minimum clearance value of between 600 and 700 mm from the table edge. (Standards quote minima of 450 mm at the underside of the desktop and 600 mm at floor level and for 150 mm above).
SEAT FOAM
The photograph shows the seat foam sectioned down the centerline. The angle of the seat surface is about 17 from the horizontal with an extra slightly softer area to resist softer slipping of the seat bones. The area behind the seat bones has been carefully designed to provide an upward force on the buttock behind the seat bones to provide extra pelvic support, i.e. to resist backward rolling of the pelvis. To make this part of the foam feel soft, although the foam is hard, there is a gap between the foam and the steel of the seat pan.
If the various synthetic materials available they prefer FLEXIBLE POLYURETHANE FOAM (FPF). The reasons for this selection and the properties of this material have been discussed below.
FLEXIBLE POLYURETHANE FOAM:
Comfort, durability, safety and economy of operation are requirements of every modern mode of transportation. Manufacturers of private and commercial vehicles meet these prerequisites by using flexible polyurethane foam (FPM) in seating systems. It is one of the most versatile manufacturing materials today with proven reliability and flexibility. Polyurethane foam can be formulated to dampen the vibration that causes discomfort for the operator of a vehicle effectively.
Improvements and refinements to the “miracle material” introduced in the early 1950’s continue to expand FPF use and add valuable benefits within the transportation industry. Protecting the environment by recycling vehicle seats to keep them out of landfills is of paramount importance. Elimination of springs in vehicular seating has helped cut the cost of recovery for recycling by about two – thirds. However, the move to deep, all – foam seating brings challenges as well as progress. New research focuses on FPF varieties that dampen the vibration created by the dynamics of the vehicle and irregularities of the roadbed. Some designers specify that a seat should be highly resilient. Others are much more concerned about vibration – dampening qualities of the seat. The two specifications appear to be in opposition, but methods have been developed control vibration and provide resiliency.
H – POINT AND TRANSMISSIVITY
Transmissivity is the transportation industry’s term for the amount of vibration transmitted through the seating platform to the driver by the motion of the vehicles.
H – Point is the term used by the industry to identify the height at which the driver has adequate visibility for safety. The H – point is influenced by several situations that may develop in the foam with extended use. The primary influence is creep (settling or compression); which, in turn, is influenced by the amount of “work” put into the foam as measured by dynamic modulus (a measuring of the dynamic firmness), and dynamic hysteresis (a measuring of the change in dynamic firmness, providing information about the foam’s ability to maintain original dampening properties).
MECHANICAL EQUIVALENT OF A SEAT
Spring and dashpot models are used to predict foam-cushioning behavior.
ACTIVE SEAT DESIGN:
The active Management System of seat design overcomes the limitations of passive suspensions by sensing changing vibration characteristics and instantly adjusting damping force, providing effective vibration damping and shock protection across a much wider range of road conditions and driver weight than passive systems. This system consists of a controllable damper filled with magnetorheological fluid (“MR fluid”), a sensor arm that measures the position of the seat suspension, a controller with a programmed algorithm that adjusts the damping force in response to changes in seat position. The Motion system senses suspension position and adjusts damping force 500 times per second.
The system includes a ride mode switch with three settings (firm, medium, and soft) to allow the driver to easily adjust the feel of the ride. Testing performed by vibration experts on a popular on – highway seat model showed that replacing the standard passive shock absorber with Motion Master reduced the Vibration Dose Value and maximum acceleration (or “shock”) transmitted to a 200 – pound driver by up to 40% and 49%, respectively.
CONCLUSION
Though the condition of Indian roads is improving, the uncomfortability and danger caused to the passengers is not a hidden fact. In such a scenario the advancements made in the field of automobile design should be included in all the cars introduced on the Indian roads. Though this may increase the cost of production marginally, the aesthetics and ergonomics of the vehicle improve. This also increases the comfort level of the vehicle and provides a sense of satisfaction to the customers. Market research has showed that the purchasing power of an Indian has increased over the past few years. Hence he certainly wouldn’t mind spending the extra bit on his comfort. Eventually it is the customer who is the king in this liberalized economy and any loss to the customer is loss to the company. Driver comfort and accessibility of the vehicle’s controls during the car’s operation maximizes the performance capabilities of the car.
“Customers value Ergonomics and they are ready to pay for it”
BIBLIOGRAPHY
• www. motion – master. com
• www. pfa. org
• “THE MECHANICAL DESIGN PROCESS” a book by DAVID G. ULLMAN
CONTENTS
1. Abstract
2. Ergonomics
3. Ergonomic Products
4. Uses of Ergonomics
5. Seat Design
6. Anthropometric aspects
7. Seat Foam
8. Conclusion
9. Bibliography
ABSTRACT
Advances in technology over the last decade have greatly benefited the automotive sector, one of its key customers. Safety and comfort of the passengers have emerged as prime concerns for vehicle manufacturers, driven by stringent government regulations that are updated over time.
This paper provides a brief overview of the conditions under which the human body is comfortable and shows how they are taken into account in seat design. Proper seat design improves the aesthetics and ergonomics of the vehicle and also contributes to its safety. The feasibility of implementing such sophisticated seats under Indian conditions has also been gauged.
ERGONOMICS
The term ergonomics comes from the Greek words “ergon”, meaning “work”, and “nomos”, meaning “laws”, and first appeared in a Polish article published in 1857. The study of human factors did not gain much attention until World War II. Accidents with military equipment were often blamed on human error, but investigations revealed that some were caused by poorly designed controls. The modern discipline of ergonomics was born in the United Kingdom on July 12, 1949, at a meeting of those interested in human work problems in the British navy. At another meeting, held on February 16, 1950, the term ergonomics was formally adopted for this growing discipline.
Today in the United States, ergonomics professionals belong to the Human Factors and Ergonomics Society (HFES), an organization with over 5000 members interested in topics ranging from aging to aerospace to computers. Ergonomic design makes consumer products safer, easier to use and more reliable.
“Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system.” In other words, it is the scientific study of the relationship between humans and their environment; it is about “fitting the job to the worker.”
ERGONOMICALLY DESIGNED PRODUCTS
An ergonomically designed toothbrush has a broad handle for easy grip, a bent neck for easier access to the back teeth, and a bristle head shaped for better contact with the tooth surface. Ergonomic design has dramatically changed the interior appearance of automobiles. The steering wheel, once a solid, awkward disc, is now larger and padded for an easier, more comfortable grip, and its center is removed to improve the driver’s view of the instruments on the dashboard. Larger, contoured seats, adjustable to suit a variety of body sizes and posture preferences, have replaced the small, upright seats of the early automobiles. Equipped with seat belts and airbags that protect the occupants in the event of a collision, modern automobiles are not only more comfortable but also safer. Virtually all automotive and component manufacturers already recognize ergonomics as an important part of the vehicle design process.
An ergonomically designed chair
APPLICATIONS OF ERGONOMICS
Size and shape
Some years ago, researchers compared the relative positions of the controls on a lathe with the size of an average male worker. It was found that the lathe operator would have to stoop and move from side to side to operate the lathe controls. An ‘ideal’ sized person to fit the lathe would be just 4.5 feet tall, 2 feet across the shoulders and have an arm span of eight feet.
This example epitomizes the shortcoming in design when no account has been taken of the user. People come in all shapes and sizes, and the ergonomist takes this variability into account when influencing the design process.
The branch of ergonomics that deals with human variability in size, shape and strength is called anthropometry.
Vision
Vision is usually the primary channel for information, yet systems are often so poorly designed that the user is unable to see the work area clearly. Many workers using computers cannot see their screens because of glare or reflections. Others, doing precise assembly tasks, have insufficient lighting and suffer eyestrain and reduced output as a result.
Sound
Sound can be a useful way to provide information, especially for warning signals. However, care must be taken not to overload this sensory channel. A recent airliner had 16 different audio warnings, far too many for a pilot to deal with in an emergency situation. A more sensible approach is to use just a few audio signals to alert the pilot, who can then obtain detailed guidance from a visual display.
Job design
One goal of ergonomics is to design jobs to fit people. This means taking account of differences such as size, strength and ability to handle information for a wide range of users. Then the tasks, the workplace and tools are designed around these differences. The benefits are improved efficiency, quality and job satisfaction. The costs of failure include increased error rates and physical fatigue - or worse.
Human error
In some industries the impact of human errors can be catastrophic. These include the nuclear and chemical industries, rail and sea transport and aviation, including air traffic control.
When disasters occur, the blame is often laid with the operators, pilots or drivers concerned - and labeled 'human error'. Often though, the errors are caused by poor equipment and system design.
Ergonomists working in these areas pay particular attention to the mental demands on the operators, designing tasks and equipment to minimize the chances of misreading information or operating the wrong controls, for example.
SEAT DESIGN
INTRODUCTION
Drivers spend a great deal of time behind the wheel and encounter a wide range of road conditions. Consequently, they are frequently exposed to shocks when their vehicles encounter irregularities. Shocks are transmitted to the driver when the seat suspension runs out of travel, phenomena known as “bottoming” and “topping”. Heavy drivers who adjust the seat height away from the centre of seat travel are at increased risk of bottoming and topping. Researchers generally agree that exposure to shock increases the risk of spinal injury and lower back pain for drivers. Extremely high shock levels, such as those encountered in an accident, can cause compressive fracture of the spine, while chronic exposure to lower levels can lead to disc degeneration and lower back pain. In addition to the increased health risks, drivers who experience frequent bottoming and topping report increased levels of fatigue. Topping and bottoming also present a safety risk, as these events can cause the driver to temporarily lose control of the vehicle when his feet and hands are thrown off the pedals and steering wheel.
PASSIVE SEAT DESIGN
Today, most driver seats have an air-ride suspension and a passive damper to isolate the driver from vibration. The seats are typically designed to isolate the driver from moderate levels of vibration between 4 and 8 Hz, because the human body is most sensitive to seat vibration in this range. However, seat suspensions designed to effectively isolate moderate vibration at 4-8 Hz are too soft to prevent the suspension from bottoming and topping when the vehicle encounters severe road conditions. Although some seat designs employ elastomer snubbers to absorb some of the impact energy of bottoming and topping, snubbers generally do not provide adequate protection for the driver. In addition, when a seat bottoms out, energy is stored in the snubbers and the air spring and then released, propelling the seat and driver upward and often causing the suspension to top out. Stiffening the spring and/or damper provides additional protection from bottoming and topping, but at the expense of overall vibration isolation. Thus, passive seat designs always sacrifice some degree of either vibration or shock isolation.
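The trade-off described above can be illustrated with a simple single-degree-of-freedom base-excitation model. The sketch below is only illustrative: the natural frequencies and damping ratio are assumed values, not data for any particular seat.

```python
import math

def transmissibility(f_excite_hz, f_natural_hz, zeta):
    """Vibration transmissibility of a 1-DOF seat suspension under base excitation."""
    r = f_excite_hz / f_natural_hz                      # frequency ratio
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

# Assumed illustration values: a "soft" seat tuned well below the 4-8 Hz band
# versus a "stiff" seat tuned near it.
for label, fn in [("soft, fn = 1.5 Hz", 1.5), ("stiff, fn = 4.0 Hz", 4.0)]:
    ratios = ["%.2f" % transmissibility(f, fn, zeta=0.2) for f in (4.0, 6.0, 8.0)]
    print(label, "-> transmissibility at 4/6/8 Hz:", ratios)
```

The soft setting passes on only a fraction of the 4-8 Hz input, but a low natural frequency implies large suspension travel, which is exactly what leads to bottoming and topping on severe roads.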
ANTHROPOMETRY
In order to design a seat it is necessary to consider the structure of the human body. Various aspects such as seat height, width, depth, backrest and armrest depend on the dimensions of the body. The further discussion aims at understanding the anthropometrics of seat design.
The branch of ergonomics that deals with human variability in size, shape and strength is called anthropometry. Tables of anthropometric data are used by ergonomists to ensure that places and items that they are designing fit the users.
ANTHROPOMETRIC ASPECTS OF SEAT DESIGN
SEAT HEIGHT (H)
As seat height increases beyond the popliteal height of the user, pressure is felt on the underside of the thighs. The resulting reduction in circulation to the lower extremities may lead to ‘pins and needles’, swollen feet and considerable discomfort. As the height decreases the user will (a) tend to flex the spine more (due to the need to achieve an acute angle between thigh and trunk); (b) experience greater problems in standing up and sitting down, due to the distance through which his centre of gravity must move; and (c) require greater leg room. In general, therefore, the optimal seat height for many purposes is close to the popliteal height, and where this cannot be achieved a seat that is too low is preferable to one that is too high. If it is necessary to make a seat higher than this, shortening the seat and rounding off its front edge in order to minimize the under-thigh pressure may mitigate the ill effects. It is of overriding importance that the height of the seat should be suitable for comfortable driving.
SEAT DEPTH (D)
If the depth is increased beyond the buttock-popliteal length, the user will not be able to engage the backrest effectively without unacceptable pressure on the backs of the knees. Furthermore, the deeper the seat, the greater are the problems of standing up and sitting down. The lower limit of seat depth is less easy to define. As little as 300 mm will still support the ischial tuberosities and may well be satisfactory in some circumstances. Tall people sometimes complain that the seats of easy chairs are too short; an inadequate backrest may well be to blame.
SEAT WIDTH
For purposes of support, a width that is some 25 mm less on either side than the maximum breadth of the hips is all that is required; hence 350 mm will be adequate. However, clearance between armrests must be adequate for the largest user. In practice, allowing for clothing and leeway, a minimum of 500 mm is required.
BACKREST DIMENSIONS (C)
The higher the backrest, the more effective it will be in supporting the weight of the trunk. This is always desirable, but in some circumstances other requirements, such as the mobility of the shoulders, may be more important. We may distinguish three varieties of backrest, each of which may be appropriate under certain circumstances: the low-level backrest, the medium-level backrest and the high-level backrest.
The low-level backrest provides support for the lumbar and lower thoracic region only and finishes below the level of the shoulder blades, thus allowing freedom of movement for the shoulders and arms; old-fashioned typists’ chairs, for example, generally had low-level backrests. To support the lower back and leave the shoulder region free, an overall backrest height (C) of about 400 mm is required.
The medium – level backrest also supports the upper back and shoulder regions. Most modern seats fall into this category. For support to mid – thoracic level an overall backrest height of about 500 mm is required and for full shoulder support about 650 mm. A figure of 500 mm is often quoted for office chairs.
The high – level backrest gives full head and neck support – for the 95th percentile man an overall backrest height of 900 mm is required. Whatever its height, it will generally be preferable and sometimes essential for the backrest to be contoured to the shape of the spine, and in particular to give ‘positive support’ to the lumbar region in the form of a convexity or pad. To achieve this end, the backrest should support you in the same place as you would support yourself with your hands to ease an aching back.
A medium- or high-level backrest should be flat or slightly concave, but the contouring of the backrest should in no case be excessive; in fact, a curve that is too pronounced is probably worse than no curve at all. It was found that a lumbar pad that protrudes 40 mm from the main plane of the backrest at its maximum point supports the back in a position that approximates that of normal standing.
BACKREST ANGLE OR RAKE (A)
As the backrest angle increases, a greater proportion of the weight of the trunk is supported; hence the compressive force between the trunk and pelvis is diminished. Furthermore, increasing the angle between trunk and thighs improves lordosis. However, the horizontal component of the compressive force increases, and this will tend to drive the buttocks forward out of the seat unless counteracted by (a) an adequate seat tilt; (b) high-friction upholstery; or (c) muscular effort from the subject. Increased rake also leads to increased difficulty in the stand-up/sit-down action. The interaction of these factors, together with a consideration of task demands, will determine the optimal rake, which will commonly be between 100° and 110°. A pronounced rake is not compatible with a low- or medium-level backrest, since the upper parts of the body become highly unstable.
SEAT ANGLE OR TILT (B)
A positive seat angle helps the user to maintain good contact with the backrest and helps to counteract any tendency to slide out of the seat. Excessive tilt reduces the hip/trunk angle and the ease of standing up and sitting down. For most purposes a tilt of 5-10° is a suitable compromise.
ARMRESTS
Armrests may give additional postural support and be an aid to standing up and sitting down. Armrests should support the fleshy part of the forearm, but unless very well padded they should not engage the bony parts of the elbow, where the highly sensitive ulnar nerve is near the surface; a gap of perhaps 100 mm between the armrest and the seat back may, therefore, be desirable. If the chair is to be used with a table the armrest should not limit access; in these circumstances it should not extend more than 350 mm in front of the seat back. An elbow rest that is somewhat lower than sitting elbow height is probably preferable to one that is higher, if a relaxed posture is to be achieved. An elbow rest 200-250 mm above the seat surface is generally considered suitable.
LEGROOM
In a variety of sitting workstations the provision of adequate lateral, vertical and forward legroom is essential if the user is to adopt a satisfactory posture.
Lateral legroom
Lateral legroom (e.g. the ‘knee hole’ of a desk) must give clearance for the thighs and knees.
Vertical legroom
Requirements will, in some circumstances, be determined by the knee height of a tall user. Alternatively, thigh clearance above the highest seat position may be more relevant; adding the 95th percentile male popliteal height and thigh thickness gives a figure of 700 mm. Standards quote a minimum of 650 mm for a normal seat.
Forward legroom
This is the most difficult clearance to calculate. At knee level, clearance from the back of a fixed seat is determined by buttock-knee length; clearance from the dashboard (or table edge) is determined by buttock-knee length minus abdominal depth, which will be around 425 mm for a male who is a 95th percentile in the former and a 5th percentile in the latter. At floor level an additional 150 mm of clearance for the feet gives a figure of 795 mm from the seat back or 575 mm from the dashboard. All of these figures are based on the assumption of a 95th percentile male sitting on a seat that is adjusted to approximately his own popliteal height, with his lower legs vertical. If the seat height is in fact lower than this he will certainly wish to stretch his legs forward. A rigorous calculation of the 95th percentile clearance requirements in these circumstances would be complex, but an approximate value may be derived as follows.
Consider a person of buttock-popliteal length B, popliteal height P and foot length F sitting on a seat of height H. He stretches out his legs so that his popliteal region is level with the seat surface. The total horizontal distance between buttocks and toes (D) is then approximated by
D = B + √(P² − H²) + F
(This ignores the effects of ankle flexion.) Hence, in the extreme case of a male who is a 95th percentile in the above dimensions, sitting on a seat that is 400 mm in height requires a total floor-level clearance of around 1190 mm from the seat back, or 970 mm from the table edge (if he is also a 5th percentile in abdominal depth). Such a figure is needlessly generous for most purposes; most ergonomic sources quote a minimum clearance of between 600 and 700 mm from the table edge. (Standards quote minima of 450 mm at the underside of the desktop and 600 mm at floor level and for 150 mm above it.)
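As a quick check of this formula, the short sketch below evaluates D for a 400 mm seat height. The values of B, P and F are assumed, rounded 95th percentile figures used only for illustration, which is why the result differs slightly from the 1190 mm quoted above.

```python
import math

def forward_clearance(b_mm, p_mm, f_mm, h_mm):
    """D = B + sqrt(P^2 - H^2) + F, ignoring ankle flexion (all dimensions in mm)."""
    if h_mm > p_mm:
        raise ValueError("Seat height exceeds popliteal height.")
    return b_mm + math.sqrt(p_mm ** 2 - h_mm ** 2) + f_mm

# Assumed illustrative 95th percentile male values (mm):
B, P, F = 550.0, 500.0, 285.0
print(round(forward_clearance(B, P, F, h_mm=400.0)), "mm floor-level clearance from the seat back")
```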
SEAT FOAM
The photograph shows the seat foam sectioned down the centerline. The angle of the seat surface is about 17° from the horizontal, with an extra, slightly softer area to resist slipping of the seat bones. The area behind the seat bones has been carefully designed to provide an upward force on the buttock behind the seat bones, giving extra pelvic support, i.e. resisting backward rolling of the pelvis. To make this part of the foam feel soft, although the foam itself is hard, there is a gap between the foam and the steel of the seat pan.
Of the various synthetic materials available, seat designers prefer FLEXIBLE POLYURETHANE FOAM (FPF). The reasons for this selection and the properties of this material are discussed below.
FLEXIBLE POLYURETHANE FOAM:
Comfort, durability, safety and economy of operation are requirements of every modern mode of transportation. Manufacturers of private and commercial vehicles meet these prerequisites by using flexible polyurethane foam (FPF) in seating systems. It is one of the most versatile manufacturing materials available today, with proven reliability and flexibility. Polyurethane foam can be formulated to effectively dampen the vibration that causes discomfort for the operator of a vehicle.
Improvements and refinements to the “miracle material” introduced in the early 1950s continue to expand FPF use and add valuable benefits within the transportation industry. Protecting the environment by recycling vehicle seats to keep them out of landfills is of paramount importance, and the elimination of springs in vehicular seating has helped cut the cost of recovery for recycling by about two-thirds. However, the move to deep, all-foam seating brings challenges as well as progress. New research focuses on FPF varieties that dampen the vibration created by the dynamics of the vehicle and irregularities of the roadbed. Some designers specify that a seat should be highly resilient; others are much more concerned about the vibration-damping qualities of the seat. The two specifications appear to be in opposition, but methods have been developed to control vibration and still provide resiliency.
H – POINT AND TRANSMISSIVITY
Transmissivity is the transportation industry’s term for the amount of vibration transmitted through the seating platform to the driver by the motion of the vehicle.
H-point is the term used by the industry to identify the height at which the driver has adequate visibility for safety. The H-point is influenced by several changes that may develop in the foam with extended use. The primary influence is creep (settling or compression), which, in turn, is influenced by the amount of “work” put into the foam as measured by the dynamic modulus (a measure of dynamic firmness) and the dynamic hysteresis (a measure of the change in dynamic firmness, providing information about the foam’s ability to maintain its original damping properties).
MECHANICAL EQUIVALENT OF A SEAT
Spring and dashpot models are used to predict foam-cushioning behavior.
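The spring-and-dashpot idea can be made concrete with a minimal Kelvin-Voigt sketch: a mass resting on a spring and a damper in parallel, stepped with explicit Euler integration. The mass, stiffness and damping values are assumptions chosen only so the example runs; they are not measured foam properties.

```python
def simulate_cushion(m=60.0, k=20000.0, c=800.0, x0=0.02, dt=1e-4, t_end=1.0):
    """Free response of a mass on a spring-dashpot cushion after an initial 20 mm compression."""
    x, v = x0, 0.0
    history = []
    for i in range(int(t_end / dt)):
        a = (-k * x - c * v) / m          # Newton's second law for the parallel spring and dashpot
        v += a * dt
        x += v * dt
        if i % 1000 == 0:                 # record every 0.1 s
            history.append((round(i * dt, 1), round(x * 1000.0, 2)))  # (time in s, deflection in mm)
    return history

print(simulate_cushion())
```

Raising c makes the deflection die out faster (a more heavily damped, “deader” cushion), while raising k makes the cushion firmer: the same qualitative trade-off discussed above in terms of dynamic modulus and hysteresis.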
ACTIVE SEAT DESIGN:
The active seat suspension system overcomes the limitations of passive suspensions by sensing changing vibration characteristics and instantly adjusting the damping force, providing effective vibration damping and shock protection across a much wider range of road conditions and driver weights than passive systems. The system consists of a controllable damper filled with magnetorheological fluid (“MR fluid”), a sensor arm that measures the position of the seat suspension, and a controller with a programmed algorithm that adjusts the damping force in response to changes in seat position. The Motion Master system senses suspension position and adjusts the damping force 500 times per second.
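The control loop just described can be sketched roughly as follows. The gains, travel limits, coil-current range and ride-mode factors are hypothetical illustration values; the published description does not give the actual algorithm, so this is only a generic semi-active damping sketch, not the real controller.

```python
# Hypothetical semi-active seat-damper loop: all numbers are illustration values.
RIDE_MODE_GAIN = {"soft": 0.5, "medium": 1.0, "firm": 1.8}

def damper_current(position_mm, velocity_mm_s, mode="medium",
                   travel_limit_mm=40.0, base_a=0.2, max_a=2.0):
    """Return an MR-damper coil current: higher when moving fast or near an end stop."""
    gain = RIDE_MODE_GAIN[mode]
    current = base_a + gain * 0.002 * abs(velocity_mm_s)   # resist rapid suspension motion
    margin = travel_limit_mm - abs(position_mm)            # distance to bottoming/topping
    if margin < 10.0:                                      # stiffen sharply near the end stops
        current += gain * (10.0 - max(margin, 0.0)) * 0.15
    return min(current, max_a)

# One step of a 500 Hz loop (dt = 2 ms): seat 35 mm below centre and still moving down fast.
print(damper_current(position_mm=-35.0, velocity_mm_s=-250.0, mode="firm"))
```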
The system includes a ride-mode switch with three settings (firm, medium and soft) to allow the driver to easily adjust the feel of the ride. Testing performed by vibration experts on a popular on-highway seat model showed that replacing the standard passive shock absorber with Motion Master reduced the Vibration Dose Value and the maximum acceleration (or “shock”) transmitted to a 200-pound driver by up to 40% and 49%, respectively.
CONCLUSION
Though the condition of Indian roads is improving, the discomfort and danger they cause to passengers are well known. In such a scenario the advancements made in the field of automobile design should be included in all the cars introduced on Indian roads. Though this may increase the cost of production marginally, the aesthetics and ergonomics of the vehicle improve. This also increases the comfort level of the vehicle and provides a sense of satisfaction to the customers. Market research has shown that the purchasing power of the Indian consumer has increased over the past few years; hence he certainly wouldn’t mind spending a little extra on his comfort. Eventually it is the customer who is king in this liberalized economy, and any loss to the customer is a loss to the company. Driver comfort and easy access to the vehicle’s controls during operation maximize the performance capabilities of the car.
“Customers value Ergonomics and they are ready to pay for it”
BIBLIOGRAPHY
• www.motion-master.com
• www.pfa.org
• The Mechanical Design Process, David G. Ullman
Electronic Stability Program
CONTENTS
INTRODUCTION
WHAT IS ESC
IMPORTANCE
COMPONENTS
WORKING
ADVANTAGES
APPLICATION
SUMMARY
Electronic Stability Program (ESP)
Electronic Stability Program (ESP) is an interactive high-tech safety system that significantly improves the stability of a vehicle in all driving situations: when starting off, when driving and when braking. It thereby increases the driver’s chances of avoiding a potential accident. ESP helps keep the driver in control of the vehicle even in critical situations.
ESP is based on already familiar components, the anti-lock braking system (ABS) and the traction control system (TCS), and also includes electronic brake-force distribution (EBD) and engine drag torque control (EDC).
ESP is an active safety system which improves vehicle stability in all driving conditions. It operates by actuating the brakes individually on one or more wheels on the front or rear axle. ESP stabilizes the vehicle when cornering, braking, or during non-driven coasting to keep it on the road and in the desired line.
ESP is a registered trademark of Robert Bosch GmbH and was originally used on Mercedes-Benz vehicles.
ESP compares the driver's intended direction, determined from steering and braking inputs, to the vehicle's actual response, via lateral acceleration, rotation (yaw) and individual wheel speeds. ESP then brakes individual front or rear wheels and/or reduces excess engine power as needed to help correct understeer (plowing) or oversteer (fishtailing). ESP also integrates all-speed traction control, which senses drive-wheel slip under acceleration and individually brakes the slipping wheel or wheels, and/or reduces excess engine power, until control is regained. ESP cannot override a car's physical limits: if a driver pushes the possibilities of the car's chassis and ESP too far, ESP cannot prevent an accident.
Stability control equipment is now generally known as electronic stability control or ESC, a category recognized by the Society of Automotive Engineers. Electronic stability control combines anti-lock brakes, traction control and yaw control (yaw is spin around a vertical axis). To grasp how it works, think of steering a canoe. If you want the canoe to turn or rotate to the right, you plant the paddle in the water on the right to provide a braking moment on the right side. The canoe pivots or rotates to the right. ESC fundamentally does the same to assist the driver.
The electronic stability program (ESP) is a further enhancement to the anti-lock braking system (ABS) and traction control system (TCS). The ESP is designed to detect a difference between the driver's control inputs and the actual response of the vehicle. When differences are detected, the system intervenes by providing braking forces to the appropriate wheels to correct the path of the vehicle. This automatic reaction is engineered for improved vehicle stability, particularly during severe cornering and on low-friction road surfaces, by helping to reduce over-steering and under-steering.
To implement ESP functionality, additional sensors must be added to the ABS system. A steering-wheel angle sensor detects the driver's input, while a yaw-rate sensor and a low-g (lateral acceleration) sensor measure the vehicle's response. Some ESP systems include a connection to the powertrain controller of the vehicle to enable reductions in engine torque when required.
As soon as skidding becomes imminent, the ESP Electronic Stability Program prevents it immediately. ESP continually monitors where the driver is steering and where the vehicle is actually going. When instability threatens, ESP selectively can brake each wheel individually, and intervene in the engine-management system. ESP stabilizes the vehicle and makes it more controllable in critical situations.
Rapid intervention: briefly applied braking pressure keeps the car on track
One of the strengths of the Electronic Stability Program is the speed with which it works: the sensing of oversteer and understeer, and the automatic braking intervention, are all completed within fractions of a second. For example, if the rear of the car starts to swing out when taking a corner too fast, the ESP microcomputer first of all reduces engine power, thus increasing the lateral forces at the rear wheels. If this is not enough to eliminate the skidding tendency, the system also applies the brakes to the outer front wheel. The braking counteracts the critical rotational movement and restores stability. The simultaneous reduction in speed has further benefits for safety.
Fig. 1. Understeering drive
When ESP corrects the vehicle, it is not a one-off event which is completed after a brief application of the brakes. The stabilization is an ongoing process which is continuously adapted to take account of situational changes in the dynamics of the vehicle, until the risk of skidding is eliminated. This adaptive control requires the sensors and actuators in the Electronic Stability Program to react and adapt with extreme speed. The system has to cope not only with fast lane changes or patches of black ice; it must also function whatever the car's load or tyre tread depth.
ESP was developed and tested with the aid of the most advanced techniques available, which systematically evaluated all potential malfunctions. Using these techniques, every conceivable system error was analyzed and methods were developed to eliminate the risk of malfunction. Amongst other things, the individual ESP components carry out self-checking routines at regular intervals; for example, the vitally important yaw sensor is checked each time it supplies information, at intervals of just 20 milliseconds. Where an ESP off-switch is fitted, it disables ESP's capability to reduce the engine torque and also lowers the ESP intervention threshold to about 20%.
How do I know ESP is working?
ESP monitors the vehicle's response to the driver's steering and braking inputs to detect oversteer or understeer. If sensors detect that a skidding condition is developing, ESP brakes individual front or rear wheels and/or reduces excess power as needed to help keep the vehicle going in the direction the driver is steering.
ESP could be realized thanks to the remarkable progress in modern microelectronics. Sensors constantly record the driver's and the vehicle's behavior and send their data to an electronic control unit. It compares the current driving condition with an appropriate nominal condition for the respective situation, and thus detects impending swerving within fractions of a second. If the car deviates from the calculated “ideal line”, ESP intervenes according to a special logic and helps to keep the vehicle on track through accurately proportioned brake impulses at the front and rear axles as well as a reduction in engine torque. Thus, the system helps to correct driving errors and swerving movements that are caused by slipperiness, wetness, gravel or other adverse road conditions. The stabilization takes place permanently – within the physical limits – and adjusts to the vehicle movements caused by the respective situation.
The triangle in the center of the speedometer flashes when ESP intervenes; either with ESP switched on or off. It's a reminder to adjust your speed to the prevailing road conditions, usually by reducing it. If instead one "steps on it", with ESP ON, the engine power may be reduced to prevent a potentially critical situation.
Electronic stability program
The standard-fitted ESP system selectively applies braking forces to the front and rear wheels in such a way as to reduce the risk of skids and slides and help the driver maintain control in critical situations. The system extends the technology of the anti-lock braking and acceleration skid control systems with a range of additional sensors which are used principally to detect yaw motion.
The ESP computer continuously compares the actual behaviour of the vehicle with the computed ideal values. The moment the car deviates from the direction intended by the driver, specially developed control logic causes the system to intervene with split-second speed to bring the car back on track. It does this in two ways:
1) Precisely controlled braking at one or more wheels.
2) Reducing engine power.
ESP in this way helps to stabilize the vehicle in critical situations.
Active safety systems help prevent accidents:
1) Antilock braking system ABS
2) Traction control system TCS
Antilock Braking System (ABS)
Laying the groundwork for stability control, in the mid-80s Bosch brought the antilock braking system (ABS) to market through Mercedes and BMW. As most consumers probably know by now, ABS has become a standard feature on many new cars. It works by sensing and preventing wheel lock-up, thereby improving the vehicle's traction and enhancing steerability during hard braking. (A simple sketch of the underlying wheel-slip calculation follows the list below.)
1) Prevents the wheels from locking and thus allows the driver to steer around obstacles.
2) The vehicle remains under control even while braking on a road that is slippery on one side.
3) The stopping distance is usually shortened compared with locked wheels.
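A minimal sketch of the wheel-slip calculation behind this kind of intervention is shown below. The 15-25% target slip window and the wheel radius are typical textbook assumptions, not Bosch calibration data.

```python
def wheel_slip(vehicle_speed_ms, wheel_speed_rad_s, wheel_radius_m=0.3):
    """Longitudinal slip ratio: 0 = free rolling, 1 = fully locked wheel."""
    if vehicle_speed_ms <= 0.1:
        return 0.0
    return (vehicle_speed_ms - wheel_speed_rad_s * wheel_radius_m) / vehicle_speed_ms

def abs_action(slip, lower=0.15, upper=0.25):
    """Keep slip inside the target window: release above it, build below it, otherwise hold."""
    if slip > upper:
        return "release brake pressure"
    if slip < lower:
        return "build brake pressure"
    return "hold brake pressure"

slip = wheel_slip(vehicle_speed_ms=20.0, wheel_speed_rad_s=45.0)   # wheel turning slower than the car
print(round(slip, 2), "->", abs_action(slip))
```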
Traction Control System (TCS)
1) Fast interventions in the engine management and brakes prevent the driven wheels from spinning.
2) Safe drive-off is possible even on a road that is slippery on one side.
3) TCS prevents the vehicle from skidding when accelerating too hard in a turn.
What does ESP do?
ESP actively enhances vehicle stability (staying in lane and on course):
1) Through interventions in the braking system or the engine management.
2) To prevent critical situations (e.g. skidding) that might lead to an accident.
3) To minimize the risk of side crashes.
What is so special about ESP?
ESP watches out:
1) Surveys the vehicle’s behavior (longitudinal and lateral dynamics)
2) Watches the driver’s commands (Steering angle, brake pressure, engine torque)
3) Is continuously active in the background.
ESP knows:
Recognizes critical situations – in many cases before the driver does.
Considers the possible ways of intervening: wheel-individual brake-pressure application and intervention in the engine management.
A frequent cause of accidents:
The driver loses control of his vehicle, for example through
1) Speeding
2) Misinterpretation of the course or the road condition
3) Sudden swerving.
25% of all accidents involving severe personal injury are caused by skidding.
(Source: GDV – General Association of German Insurance Companies)
60% of all accidents with fatal injuries are caused by side crashes. These side crashes are mainly caused by skidding due to excessive speed, driving errors or excessive steering movements. (Source: GDV – General Association of German Insurance Companies)
What are the components of ESP?
The Bosch ESP components:
1) Hydraulic modulator with attached ECU
2) Wheel-speed sensors
3) Steering-angle sensor
4) Yaw-rate and lateral acceleration sensor
5) Communication with engine management.
First generation: complex hydraulic system develops braking pressure
The central control unit of the first-generation ESP system comprised two microprocessors, each with 48 kilobytes of memory, and the hydraulic system consisted of a pressurising pump, a charge piston and a central hydraulic unit. The pressurising pump was required for fast and reliable development of braking pressure under all temperature conditions. The hydraulic unit distributed the pressure individually to the wheels.
WORKING
The heart of ESP is a yaw velocity sensor which resembles the ones used in aircraft and space vehicles. Like a compass, it constantly monitors the exact attitude of the car and registers every incipient spin. Other sensors report how high the current brake pressure is, what the position of the steering wheel is, how great the lateral acceleration is, what the speed is and how big the difference in wheel speeds is. Whenever handling becomes unstable, the necessary commands are executed and the vehicle is brought under control in a fraction of a second.
The ESP continuously compares the actual driving condition conveyed by the sensors with the driver’s intention. The software recognizes where the driver wants to steer via the anti-lock brake system sensors, from the speeds of the four wheels, and from the steering movement, which is recorded by another sensor mounted on the steering column. If there is a difference between the driver’s intent, determined in a vehicle model inside the computer, and the current driving condition that cannot be compensated without increased effort by the driver, ESP becomes active – up to 150 times per second. If there is a tendency to understeer, ESP brakes the wheel on the inside of the curve as a priority and brings the car back onto the desired course. If the vehicle is oversteering towards the edge of the road, ESP brakes the wheel on the outside of the curve, generating a moment opposed to the yawing moment and restoring a driving condition that is easy to control.
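The comparison described above can be sketched with a linear single-track (“bicycle”) model that turns steering angle and speed into a reference yaw rate and compares it with the measured yaw rate. The wheelbase, understeer gradient and intervention threshold below are assumed illustration values, not the actual Bosch calibration.

```python
import math

def reference_yaw_rate(speed_ms, steer_angle_rad, wheelbase_m=2.7, understeer_grad=0.0025):
    """Yaw rate (rad/s) the driver is asking for, from a linear single-track model."""
    return speed_ms * steer_angle_rad / (wheelbase_m + understeer_grad * speed_ms ** 2)

def esp_intervention(speed_ms, steer_angle_rad, measured_yaw_rad_s, threshold=0.05):
    """Choose a corrective action when the measured yaw rate deviates too far from the reference."""
    error = measured_yaw_rad_s - reference_yaw_rate(speed_ms, steer_angle_rad)
    if abs(error) < threshold:
        return "no intervention"
    turning_left = steer_angle_rad > 0
    if (error > 0) == turning_left:
        # Car rotates more than intended: oversteer -> brake the outer front wheel.
        return "brake the outer front wheel, reduce engine torque"
    # Car rotates less than intended: understeer -> brake a wheel on the inside of the curve.
    return "brake a wheel on the inside of the curve, reduce engine torque"

# 90 km/h, 5 degrees of steering to the left, but the car yaws faster than requested.
print(esp_intervention(speed_ms=25.0, steer_angle_rad=math.radians(5), measured_yaw_rad_s=0.65))
```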
ESP works according the principle of an “observer”: Sensors acquire driver and vehicle behavior, send their data to a powerful microcomputer, that is loaded with a mathematical model. Thus, the actual state of the vehicle is compared with a nominal state appropriate for the respective situation, and impending swerving is detected.
From a physical standpoint, that swerving is nothing other than the turning of the vehicle about its own vertical axis. The faster that turning takes place, the greater the swerving movement and the accident risk. However, reliable measurement of this rotational speed originally required a complicated system that existed only in aerospace technology: a so-called turning or yaw-rate sensor, which was too susceptible to breakdown and too expensive for automotive applications. Therefore, the experts developed a comparable measuring element for the stability program, which consists of a small hollow steel cylinder. Quartz elements excite it to defined vibrations, which shift as the vehicle rotates. Compensating for that shift requires an electrical potential whose value is the measuring signal for the rotational speed of the car.
In addition to the rotational speed, the ESP computer processes further sensor information about the driver's intentions and the actual behavior of the vehicle:
The steering angle sensor measures the turn-in of the steering wheel and thus acquires where the driver wants to go.
The wheel speed sensors register the speed of the vehicle and the tendency of the wheels to lose adhesion with the road surface.
The lateral acceleration sensor detects when the vehicle is drifting off laterally.
The rotational speed sensor is the heart of the electronic stability program. It measures the rotational movement of the vehicle. If it deviates from the calculated “ideal line”, impending swerving is detected.
The admission pressure sensor detects the brake pressure applied by the driver. In addition, the ESP ECU is connected to the engine and automatic transmission via the CAN data bus (Controller Area Network), so that it also receives current data about engine torque, accelerator-pedal position and gear ratio at any time.
Constant Control: ESP Is Ready in Any Driving Situation
While driving, the ESP computer constantly compares the actual vehicle behavior with the programmed nominal values. If the vehicle deviates from the safe “ideal line”, the system intervenes with lightning speed according to a specially developed logic and can bring the vehicle back onto the right course in two ways: through exactly measured brake impulses at the front or rear axle, and through reduction of the engine torque. Here, within the limits of physics, ESP corrects driving errors as well as swerving movements caused by ice, wetness, gravel or other adverse road-surface conditions, where the driver normally hardly has a chance to keep his vehicle on track through steering or braking manoeuvres alone. Therefore, the system – compared with traction control – is always ready: while braking, during acceleration or when coasting.
Advantages of ESP
1) Improves moving-off and acceleration capabilities by increasing traction; especially useful on road surfaces with different levels of grip and when cornering.
2) Improves active dynamic safety, since only a wheel which is not spinning can provide optimum traction with no loss of lateral stability.
3) Automatically adapts the engine torque to suit the ability of the wheels to transmit this to the road when the driver applies too much throttle.
4) Reduces the danger of traction loss under all road conditions by automatically stabilizing the vehicle during braking, acceleration and in spins.
5) Significantly improves the directional stability of the vehicle when cornering, up to the limit range.
CONCLUSION
Numerous international studies have confirmed the effectiveness of ESC in helping the driver maintain control of the car, helping save lives and reducing the severity of crashes. In the fall of 2004 in the U.S., the National Highway Traffic Safety Administration (NHTSA) confirmed the international studies, releasing results of a U.S. field study of ESC effectiveness. NHTSA concluded that ESC reduces crashes by 35%. The prestigious Insurance Institute for Highway Safety later issued its own study, which concluded that the widespread application of ESC could save 7,000 lives a year. That makes ESC the greatest safety-equipment development since seat belts, according to some experts. Other manufacturers offer electronic stability control systems under different marketing names.
ESP reduces the danger of spinning in curves, in avoidance maneuvers or during braking, and supports the driver in better controlling critical situations. “The current analysis of the accident statistics shows that ESP makes an important contribution to accident prevention, and is therefore as significant for traffic safety as ABS, the seat belt and the airbag.”
Practice shows that vehicle dynamic control systems like ESP are capable of making skidding avoidable, or at least of increasing control. With their widespread introduction, a substantial decrease in the number of serious accidents can be expected.
(Source: RESIKO survey of the GDV – General Association of German Insurance Companies)
REFERENCES
• Robert Bosch GmbH – Systems and Products for Automobile Manufacturers
• Vehicle Stability Enhancement Systems – TRAXXAR
• ESP – Electronic Stability Program, Bosch
• www.autoweb.com.au
• www.answers.com
INTRODUCTION
WHAT IS ESC
IMPORTANCE
COMPONENTS
WORKING
ADVANTAGES
APPLICATION
SUMMARY
Electronic Stability Program (ESP)
Electronic Stability Program (ESP) is an interactive high-tech safety system that significantly improves the stability of a vehicle in all driving situations .When starting off, When driving it self and when braking and so increases the driver’s chances of avoiding a potential accident. ESP helps keep the driver in control of the vehicle even in critical situations.
ESP is based on already familiar components; the anti-lock brake system (ABS) and traction control (TCS) and also include electronic brake power distribution (EBD) and engine drag torque control (EDC).
ESP is an active safety system which improves vehicle stability in all driving conditions. It operates by actuating the brakes individually on one or more wheels on the front or rear axle. ESP stabilizes the vehicle when cornering, braking ,or during non-driven coasting to keep it on the road and in the desired line.
ESP is a registered trademark of the Robert Bosch GmbH and used originally for Mercedes-Benz.
ESP compares the driver's intended direction in steering and braking inputs, to the vehicle's response, via lateral acceleration, rotation (yaw) and individual wheel speeds. ESP then brakes individual front or rear wheels and/or reduces excess engine power as needed to help correct understeer (plowing) or overseer (fishtailing). ESP also integrates all-speed traction control, which senses drive-wheel slip under acceleration and individually brakes the slipping wheel or wheels, and/or reduces excess engine power, until control is regained. ESP cannot override a car's physical limits. If a driver pushes the possibilities of the car's chassis and ESP too far, ESP cannot prevent an accident.
Stability control equipment is now generally known as electronic stability control or ESC, a category recognized by the Society of Automotive Engineers. Electronic stability control combines anti-lock brakes, traction control and yaw control (yaw is spin around a vertical axis). To grasp how it works, think of steering a canoe. If you want the canoe to turn or rotate to the right, you plant the paddle in the water on the right to provide a braking moment on the right side. The canoe pivots or rotates to the right. ESC fundamentally does the same to assist the driver.
The electronic stability program (ESP) is a further enhancement to the anti-lock braking system (ABS) and traction control system (TCS). The ESP is designed to detect a difference between the driver's control inputs and the actual response of the vehicle. When differences are detected, the system intervenes by providing braking forces to the appropriate wheels to correct the path of the vehicle. This automatic reaction is engineered for improved vehicle stability, particularly during severe cornering and on low-friction road surfaces, by helping to reduce over-steering and under-steering.
To implement ESP functionality, additional sensors must be added to the ABS system. A steering wheel angle sensor is used to detect driver input with a yaw rate sensor and a low-G sensor that measure the vehicle response. Some ESP systems include a connection to the powertrain controller of the vehicle to enable reductions in engine torque when required.
As soon as skidding becomes imminent, the ESP Electronic Stability Program prevents it immediately. ESP continually monitors where the driver is steering and where the vehicle is actually going. When instability threatens, ESP selectively can brake each wheel individually, and intervene in the engine-management system. ESP stabilizes the vehicle and makes it more controllable in critical situations.
Rapid intervention: briefly applied braking pressure keeps the car on track
One of the strengths of the Electronic Stability Program is the speed with which it works: the sensing of oversteer and understeer, and the automatic braking intervention, are all completed within fractions of a second. For example if the rear of the car starts to swing out when taking a corner too fast, the ESP microcomputer first of all reduces engine power, thus increasing the lateral forces at the rear wheels. If this is not enough to eliminate the skidding tendency, the system also applies the brakes to the outer front wheel. The braking counteracts the critical rotational movement and restores stability. The simultaneous reduction in speed has further benefits for safety
Fig 1. Under steering drive
When ESP corrects the vehicle, it is not a one-off event which is completed after a brief application of the brakes. The stabilization is an ongoing process which is continuously adapted to take account of situational changes in the dynamics of the vehicle, until the risk of skidding is eliminated. This adaptive control requires the sensors and actuators in the Electronic Stability Program to react and adapt with extreme speed. The system has to cope not only with fast lane changes or patches of black ice; it must also function regardless of vehicle load or tyre tread depth.
ESP was developed and tested with the aid of the most advanced techniques available, which systematically evaluated all potential malfunctions. Using these techniques, every conceivable system error was analyzed and methods were developed to eliminate the risk of malfunction. Amongst other things, the individual ESP components carry out self-checking routines at regular intervals. For example, the vitally important yaw sensor is checked each time it supplies information, at intervals of just 20 milliseconds. The ESP (on/off) switch disables ESP's capability to reduce the engine torque; it also reduces the ESP intervention threshold to about 20%.
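To give an idea of what such a periodic self-check might look like, the sketch below cross-checks the yaw-rate signal against lateral acceleration and speed once per cycle, exploiting the fact that in roughly steady cornering the lateral acceleration is approximately the yaw rate multiplied by the speed. The comparison rule and the tolerance are assumptions, not the actual production test.

def yaw_sensor_plausible(yaw_rate, lateral_accel, speed, tolerance=2.0):
    """Flag the yaw-rate signal as implausible if it disagrees strongly with lateral acceleration."""
    if speed < 1.0:                    # the cross-check is not meaningful near standstill
        return True
    expected_accel = yaw_rate * speed  # m/s^2 predicted from the yaw-rate signal
    return abs(expected_accel - lateral_accel) < tolerance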
How do I know ESP is working?
ESP monitors the vehicle's response to the driver's steering and braking inputs to detect oversteer or understeer. If sensors detect that a skidding condition is developing, ESP brakes individual front or rear wheels and/or reduces excess power as needed to help keep the vehicle going in the direction the driver is steering.
ESP was made possible above all by the remarkable progress in modern microelectronics. Sensors constantly record driver inputs and vehicle behavior and send their data to an electronic control unit. It compares the current driving condition with an appropriate nominal condition for the respective situation, and thus detects impending swerving within fractions of a second. If the car deviates from the calculated "ideal line", ESP intervenes according to a special logic and helps to keep the vehicle on track through accurately proportioned brake impulses at the front and rear axle as well as a reduction in engine torque. The system thus helps to correct driving errors and swerving movements that are caused by slipperiness, wetness, gravel or other adverse road conditions. The stabilization takes place permanently, within the physical limits, and adjusts to the vehicle movements caused by the respective situation.
The triangle in the center of the speedometer flashes when ESP intervenes, whether ESP is switched on or off. It is a reminder to adjust your speed to the prevailing road conditions, usually by reducing it. If the driver instead accelerates with ESP switched on, the engine power may be reduced to prevent a potentially critical situation.
Electronic stability program
The standard-fitted ESP system selectively applies braking forces to the front and rear wheels in such a way as to reduce the risk of skids and slides and help the driver maintain control in critical situations. The system extends the technology of the anti-lock braking and acceleration skid control systems with a range of additional sensors which are used principally to detect yaw motion.
The ESP computer continuously compares the actual behaviour of the vehicle with the computed ideal values. The moment the car deviates from the direction intended by the driver, specially developed control logic causes the system to intervene with split-second speed to bring the car back on track. It does this in two ways:
1) Precisely controlled braking at one or more wheels (see the worked example below)
2) Reducing engine power.
ESP in this way helps to stabilize the vehicle in critical situations.
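A short worked example helps to show why braking a single wheel can steer the car back on course: a brake force applied on one side of an axle acts at roughly half the track width from the vehicle's centreline and therefore produces a corrective yaw moment about the vertical axis. The numbers below are illustrative assumptions.

brake_force = 2000.0   # N, braking force applied at one front wheel (assumed value)
track_width = 1.5      # m, distance between the left and right wheels (assumed value)

yaw_moment = brake_force * track_width / 2.0           # force acts about half the track width off-centre
print(f"Corrective yaw moment: {yaw_moment:.0f} N*m")  # prints 1500 N*m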
Active safety systems help prevent accidents:
1) Antilock Braking System (ABS)
2) Traction Control System (TCS)
Antilock Braking System (ABS)
Laying the groundwork for stability control, in the mid-80s Bosch brought the antilock braking system (ABS) to market through Mercedes and BMW. As most consumers probably know by now, ABS has become a standard feature on many new cars. It works by sensing and preventing wheel lock-up, thereby improving the vehicle's traction and enhancing steerability during hard braking.
1) Prevents the wheels from locking and thus allows the driver to steer around obstacles.
2) The vehicle remains under control even while braking on a road that is slippery on one side.
3) The stopping distance is usually shortened compared to braking with locked wheels.
Traction Control System (TCS)
1) Fast interventions in engine management and the brakes prevent the driven wheels from spinning.
2) Safe drive-off is possible even on a road that is slippery on one side.
3) TCS prevents the vehicle from skidding when accelerating too hard in a turn.
What does ESP do?
ESP actively enhances vehicle stability (staying in lane and in direction)
1) Through interventions in the braking system or the engine management.
2) To prevent critical situations (e.g. skidding) that might lead to an accident.
3) To minimize the risk of side crashes.
What is so special about ESP?
ESP watches out:
1) Surveys the vehicle’s behavior (longitudinal and lateral dynamics)
2) Watches the driver’s commands (Steering angle, brake pressure, engine torque)
3) Is continuously active in the background.
ESP knows:
Recognizes critical situations – in many cases before the driver does.
Considers the possible ways of intervening:
1) Wheel-individual brake-pressure application
2) Intervention in the engine management.
A frequent cause of accidents:
The driver loses control of the vehicle, for example through
1) Speeding
2) Misinterpretation of the course or the road condition
3) Sudden swerving.
25% of all accidents involving severe personal injury are caused by skidding.
(Source: GDV – General Association of German Insurance Companies)
60% of all accidents with fatal injuries are caused by side crashes. These side crashes are mainly caused by skidding due to excessive speed, driving errors or excessive steering movements. (Source: GDV – General Association of German Insurance Companies)
What are the components of ESP?
The Bosch ESP components:
1) Hydraulic modulator with attached ECU
2) Wheel-speed sensors
3) Steering-angle sensor
4) Yaw-rate and lateral acceleration sensor
5) Communication with engine management.
First generation: complex hydraulic system develops braking pressure
The central control unit of the first-generation ESP system comprised two microprocessors, each with 48 kilobytes of memory, and the hydraulic system consisted of a pressurising pump, a charge piston and a central hydraulic unit. The pressurising pump was required for fast and reliable development of braking pressure under all temperature conditions. The hydraulic unit distributed the pressure individually to the wheels.
WORKING
The heart of ESP is a yaw velocity sensor which resembles the ones used in aircraft and space vehicles. Like a compass, it constantly monitors the exact attitude of the car and registers every incipient spin. Other sensors report how high the current brake pressure is, what the position of the steering wheel is, how great the lateral acceleration is, what the speed is and how big the difference in wheel speeds is. Whenever handling becomes unstable, the necessary commands are executed and the vehicle is brought under control in a fraction of a second.
The ESP continuously compares the actual driving condition conveyed by the sensors with the driver's intention. The software recognizes where the driver wants to steer via the anti-lock brake system sensors, from the speeds of the four wheels, and via the steering movement, which is recorded by another sensor mounted on the steering column. If there is a difference between the driver's intent, determined in a vehicle model inside the computer, and the current driving condition that cannot be compensated without increased effort by the driver, ESP becomes active – up to 150 times per second. If there is a tendency to understeer, ESP brakes the wheel on the inside of the curve as a priority and brings the car back onto the desired course. If the vehicle is oversteering toward the edge of the road, ESP brakes the outer front wheel, generating a moment opposed to the yawing moment and producing a driving condition that is easier to correct.
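The wheel-selection rule described in this paragraph can be summarised in a small sketch. The mapping below (inner rear wheel for understeer, outer front wheel for oversteer) follows the text; the function, the wheel names and the fixed cycle time are simplifying assumptions.

CYCLE_TIME = 1.0 / 150.0   # seconds; reflects the "up to 150 times per second" mentioned above

def select_brake_wheel(state, turning_left):
    """Pick the wheel to brake for the detected condition (simplified)."""
    if state == "understeer":
        # brake the wheel on the inside of the curve (inner rear wheel assumed here)
        return "rear_left" if turning_left else "rear_right"
    if state == "oversteer":
        # brake the outer front wheel to generate a counter-yaw moment
        return "front_right" if turning_left else "front_left"
    return None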
ESP works according to the principle of an "observer": sensors acquire driver and vehicle behavior and send their data to a powerful microcomputer that is loaded with a mathematical model. The actual state of the vehicle is thus compared with a nominal state appropriate for the respective situation, and impending swerving is detected.
From a physical standpoint, that swerving is nothing other than the rotation of the vehicle about its own vertical axis. The faster that rotation takes place, the greater the swerving movement and the accident risk. Reliably measuring this rotational speed, however, originally required a complicated system that existed only in aerospace technology: a so-called yaw-rate sensor, which was too failure-prone and too expensive for automotive applications. The engineers therefore developed a comparable measuring element for the stability program, consisting of a small hollow steel cylinder. Quartz elements excite it to defined vibrations, which shift as a result of the rotational movement of the vehicle. The electrical potential needed to compensate for that shift serves as the measuring signal for the rotational speed of the car.
In addition to the rotational speed, the ESP computer processes further sensor information about the driver's intention and the actual behavior of the vehicle:
The steering angle sensor measures the turn-in of the steering wheel and thus acquires where the driver wants to go.
The wheel speed sensors register the speed of the vehicle and the tendency of the wheels to lose adhesion with the road surface.
The lateral acceleration sensor detects when the vehicle is drifting off laterally.
The rotational speed sensor is the heart of the electronic stability program. It measures the rotational movement of the vehicle. If it deviates from the calculated “ideal line”, impending swerving is detected.
The admission pressure sensor detects the brake pressure input by the driver. In addition, the ESP ECU is connected to the engine and automatic transmission via the CAN data bus (Controller Area Network), so that it also receives current data about engine torque, gas pedal position and gear ratio at any time.
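To give an idea of how a nominal ("ideal line") condition can be computed from these signals, the sketch below uses the widely taught steady-state single-track ("bicycle") model to turn vehicle speed and road-wheel steering angle into a reference yaw rate. This is a textbook simplification, not the actual vehicle model in the ESP ECU, and the parameter values are assumptions.

WHEELBASE = 2.7              # m, front-to-rear axle distance (assumed)
CHARACTERISTIC_SPEED = 22.0  # m/s, tuning constant of the single-track model (assumed)

def nominal_yaw_rate(speed, road_wheel_angle):
    """Steady-state yaw rate implied by the driver's steering input, in rad/s."""
    return (speed * road_wheel_angle) / (
        WHEELBASE * (1.0 + (speed / CHARACTERISTIC_SPEED) ** 2))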
Constant Control: ESP Is Ready in Any Driving Situation
While driving, the ESP computer constantly compares the actual vehicle behavior with the programmed nominal values. If the vehicle deviates from the safe "ideal line", the system intervenes with lightning speed according to a specially developed logic, and can bring the vehicle back onto the right course in two ways: through precisely measured brake impulses at the front or rear axle, and through reduction of the engine torque. Within the limits of physics, ESP corrects driving errors as well as swerving movements caused by ice, wetness, gravel or other adverse road surface conditions, where the driver would normally hardly have a chance to keep the vehicle on track through steering or braking maneuvers alone. In contrast to traction control, the system is therefore always ready: while braking, during acceleration or when coasting.
Advantages of ESP
1) Improves moving-off and acceleration capabilities by increasing traction; especially useful on road surfaces with different levels of grip and when cornering.
2) Improves active dynamic safety, since only a wheel which is not spinning can provide optimum traction with no loss of lateral stability.
3) Automatically adapts the engine torque to suit the ability of the wheels to transmit this to the road when the driver applies too much throttle.
4) Reduces the danger of traction loss under all road conditions by automatically stabilizing the vehicle during braking, acceleration and in spins.
5) Significantly improves the directional stability of the vehicle when cornering, right up to the limit range.
CONCLUSION
Numerous international studies have confirmed the effectiveness of ESC in helping the driver maintain control of the car, saving lives and reducing the severity of crashes. In the fall of 2004 in the U.S., the National Highway Traffic Safety Administration (NHTSA) confirmed the international studies, releasing results of a U.S. field study of ESC effectiveness. NHTSA concluded that ESC reduces crashes by 35%. The Insurance Institute for Highway Safety later issued its own study, which concluded that the widespread application of ESC could save 7,000 lives a year. According to some experts, that makes ESC the greatest safety-equipment development since the seat belt. Other manufacturers offer electronic stability control systems under different marketing names.
ESP reduces the danger of spinning in curves, in avoidance maneuvers or during braking, and supports the driver in better controlling critical situations. "The current analysis of the accident statistics shows that ESP makes an important contribution to accident prevention, and is therefore as significant for traffic safety as ABS, the seat belt and the airbag."
Practice shows that vehicle dynamic control systems like ESP are capable of avoiding skidding, or at least of increasing controllability. With their widespread introduction, a substantial decrease in the number of serious accidents can be expected.
(Source: RESIKO survey, GDV – General Association of German Insurance Companies)
REFERENCES
• Robert Bosch GmbH – Systems and Products for Automobile Manufacturers
• Vehicle Stability Enhancement Systems – TRAXXAR
• ESP – Electronic Stability Program, Robert Bosch GmbH
• www.autoweb.com.au
• www.answers.com