A process analyser may be defined as an unattended instrument that continuously or semi-continuously monitors a process stream for one or more chemical components. Most process analysers operate on the same basic principles as their laboratory counterparts, but with the addition of mechanisms and circuitry to perform the required analysis unattended and to present the resulting data as desired. In addition, process analysers must be housed both to comply with electrical standards and for protection from weather and physical abuse.
Process analysers may be classified in various ways depending upon the purpose of the classification. Some classifications are made (a) by operating principle (infrared, ultraviolet, chromatographic, etc.), (b) by type of analysis (oxygen, carbon dioxide, etc.) and (c) by selectivity, i.e. selective versus non-selective (an infrared analyser may be sensitised to monitor only one component, while a chromatograph may monitor several components).
In application, process analysers are always calibrated empirically on standard samples prepared and analysed by the laboratory. For this reason, process calibration can be no better than the laboratory analysis, since the errors of both units are compounded. Repeatability, however, is superior for process units, since errors due to human variances and ambient conditions are virtually eliminated. Following are discussions of the most frequently encountered analysers in the processing industries.
- Oxygen Analyser (O2)
The universal demand for oxygen analysis, due to its essential role in oxidation, combustion and industrial processing applications, has led to a large number of varied techniques applied to process analysers. The recent intense interest in the ecological field will probably lead to further advances, especially in the analysis of dissolved oxygen. The most widely used methods of oxygen analysis are the deflection paramagnetic, thermal paramagnetic, catalytic combustion, microfuel cell and galvanic cell methods.
1.1. Deflection Paramagnetic:
As the name implies, the deflection paramagnetic method of analysis is based on the strong attraction of oxygen by magnetic fields, while other gases are less attracted or even repelled. Paramagnetic susceptibility is measured directly by determining the change of the magnetic force acting on a test body suspended in a non-uniform magnetic field while the test body is surrounded by sample gas. If the sample gas is more paramagnetic than the test body, the test body is repelled from the field of maximum flux density; if the sample gas is less paramagnetic, the test body is drawn toward the field. The test body is made up of two hollow glass spheres on a bar which is supported on a torsion fibre. A small mirror is mounted at the center of the bar to reflect light from a source to a light-dividing mirror, which reflects the light to two phototubes whose outputs are proportional to the reflected light incident on their surfaces. With no oxygen present, the reflected light is equal on each phototube, and their outputs are equal. In the presence of oxygen, however, the test body is rotated on its fibre suspension, the amount of light reflected to the phototubes is unequal, and their outputs are unequal. Since the difference in phototube outputs is proportional to the difference between the magnetic susceptibilities of the test body and the gas which the test body displaces, the imbalance is a linear function of oxygen concentration. The phototube imbalance is amplified and fed back to the test body to control its potential. Figure 1 shows a null-balance circuit with a motor-driven potentiometer which supplies a variable electrostatic potential to the body, so that the body is held in the null position. Any change in oxygen concentration is followed by a corresponding change in electrostatic force, and null conditions are restored. Recorded output is taken from the DC potential required to maintain the null position.
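The null-balance action can be sketched numerically. The model below is illustrative only (the gain, step count and units are arbitrary assumptions, not instrument values): the restoring potential converges to a value proportional to the oxygen concentration, which is what the recorder reads.

```python
# Minimal sketch of a null-balance loop: integral feedback drives the
# phototube imbalance to zero, and the restoring DC potential needed to
# hold the test body at null is taken as the recorded output.

def null_balance(o2_conc, gain=0.5, steps=200):
    """Drive the phototube imbalance to zero; return the null potential."""
    potential = 0.0                      # electrostatic restoring potential
    for _ in range(steps):
        # Imbalance grows with O2 concentration, cancelled by the potential.
        imbalance = o2_conc - potential
        potential += gain * imbalance    # motor-driven potentiometer action
    return potential                     # at null, proportional to O2

print(null_balance(5.0))   # converges to ~5.0: output tracks the O2 level
```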
Close temperature control is required for high accuracy; this is achieved by using low sample flows and allowing the gas to reach the temperature of the analyser. Sample gas above 100 °F, however, must be cooled before entering the analyser.
The sensing unit is designed with internal shock mounting to protect the test body suspension system and the optical mirrors. Automatic self-standardisation is available. Periodic calibration checks consist of checking zero with an oxygen-free standard and using a standard sample of known concentration to check one point of the linear curve.
1.2 Thermal Paramagnetic:
The thermal paramagnetic analyser combines the principles of the paramagnetic susceptibility of oxygen with the thermal conductivity principle of TC cells. This method avoids the use of light beams, reflective lenses, torsion fibres, diaphragms and moving parts that must be kept in critical adjustment.
Figure 2 depicts a thermo-magnetic cell, which is merely a thermal conductivity cell (with Wheatstone bridge circuitry) equipped with a magnet to provide magnetic flux. With the two cells as shown, the filaments are cooled equally, and there is no measurable difference in their resistance. However, when the magnet is moved into position so that one filament is located in a region of high magnetic flux, oxygen, which is paramagnetic, is concentrated in that region, displacing the other gases.
As the oxygen is heated by the element, it loses its magnetic susceptibility inversely as the square of the temperature. The heated oxygen is displaced by cooler oxygen with higher magnetic susceptibility. The induced flow of oxygen past the filament cools it and changes its resistance, thereby upsetting the electrical balance of the bridge measuring circuit. This imbalance is proportional to the effective magnetic susceptibility of the gas and, hence, to the amount of oxygen present.
Calibration of this analyser is dependent upon constant temperature and pressure conditions. Temperature changes result in resistance variations of the thermal conductivity element, which, in turn, changes the output of the bridge measuring circuit. This error is prevented by case temperature control.
Changes in the sample stream or atmospheric pressure affect the flow rate through the cell and therefore the conduction of heat away from the thermal conductivity element. One method of compensating for changes in atmospheric pressure, shown in figure 1, makes use of two compensating cells of different diameters. Since both cells are exposed to atmospheric pressure through a common port, the larger cell contains a larger volume of air and is more strongly influenced by convection from a pressure change than is the smaller cell. By precise sizing of the compensating cells, their combined response is made to offset the simultaneous change in the sample cell due to the atmospheric pressure change.
Calibration of this type of analyser is extremely easy. Zero is checked by swinging the magnet away from the measuring cell. The calibration is completed by checking one upscale point. If the range spans the concentration of oxygen in air (20.9%), air is passed through the cell and the analyser is adjusted to read 20.9%. If the range is lower than 21%, air may still be used by switching in a calibrating resistor.
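The zero/span check described above amounts to a two-point linear calibration. The sketch below is a hedged illustration: the raw readings are made-up numbers, and 20.9% is the nominal oxygen content of dry air.

```python
# Two-point (zero/span) calibration sketch: zero with the magnet swung away
# (or an O2-free standard), span with ambient air.

def calibrate(zero_reading, air_reading, air_o2=20.9):
    """Return a function mapping raw readings to %O2 from a two-point check."""
    span = air_o2 / (air_reading - zero_reading)
    return lambda raw: (raw - zero_reading) * span

to_o2 = calibrate(zero_reading=0.3, air_reading=21.2)  # illustrative raw values
print(round(to_o2(10.75), 2))   # -> 10.45
```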
1.3 Catalytic combustion:
The catalytic-combustion analyser measures oxygen indirectly by measuring the heat liberated when the oxygen burns a fuel. This is accomplished by adding hydrogen to the oxygen-containing gas sample and passing the mixture over a heated noble-metal filament, which causes combustion to take place. The filament assembly in figure 3 consists of two noble-metal thermal conductivity filaments mounted in separate compartments of a sample cell. The measuring filament is covered with a screen so that it is fully exposed to the hydrogen-oxygen gas mixture. The compensating filament compartment is closed except for a small hole which permits a small amount of the mixed gas to diffuse in, compensating for temperature and normal conductivity variations. Combustion takes place in each chamber due to the temperatures of the filaments. The rise in temperature increases the resistance of the measuring filament in proportion to the amount of oxygen present in the mixed gas stream and causes an imbalance in the Wheatstone bridge signal circuit. A calibration check is made simply by flowing samples of known oxygen content through the analyser and adjusting the recorder readings.
Few process streams are free of combustible gases, and in a catalytic-combustion analyser these gases burn along with the oxygen. They must be measured and accounted for in the O2 analysis. The same cell configuration used for the hydrogen-sample gas mixture is also used to burn the combustibles in the sample. In this cell, however, air is mixed with the sample gas instead of hydrogen. In fact, both cells may be built into the same block, with one reading oxygen content and the other reading combustibles content.
1.4 Microfuel Cell:
One analyser is available which operates on a patented electrochemical transducer with unique features. The cell is a sealed, disposable disc, specific to oxygen, which generates a current flow in the presence of oxygen. In operation, oxygen diffuses through a Teflon membrane and is reduced on the surface of the cathode. A corresponding oxidation occurs internally at the anode, and a current is produced which is proportional to the concentration of oxygen. In the absence of oxygen, no current is produced; therefore, no zero calibration is required. The cell output is very stable, and the useful life of the cell depends upon the length of time and magnitude of oxygen exposure. One of the attractive features of this type of unit is its low maintenance cost. The cell is merely replaced periodically by inserting a new one.
- Moisture Analyser
The DewPro MMY 30 trace moisture transmitter is a loop-powered dew point measuring device. The transmitter includes a sensor element, a flow chamber, a weatherproof enclosure, microprocessor electronics, and assorted fittings, all in a compact assembly. In most cases either the inlet or outlet port includes an orifice to regulate the flow. The placement of this orifice determines whether the dew point measurement is made at process (line) pressure (outlet orifice) or at atmospheric pressure (inlet orifice).
2.1 Theory of Operation:
The DewPro MMY 30 microprocessor-controlled electronics operate with a DC voltage supply from 9 to 32 V DC. At the nominal 24 V DC supply, the maximum loop resistance is 750 Ω. The signal is represented by the 4 to 20 mA loop current and is directly proportional to the dew point range in °C or °F. In the standard range, 4 mA corresponds to -90 °C (-130 °F) and 20 mA to +10 °C (+50 °F) dew point temperature.
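Because the 4-20 mA scaling is linear, the dew point can be recovered from the loop current directly. A minimal sketch for the standard range (the function name and the bounds check are our own additions, not part of the product):

```python
# Map a 4-20 mA loop current onto the standard dew point range:
# 4 mA -> -90 degC, 20 mA -> +10 degC.

def loop_to_dewpoint(ma, dp_lo=-90.0, dp_hi=10.0):
    """Convert loop current (mA) to dew point (degC)."""
    if not 4.0 <= ma <= 20.0:
        raise ValueError("loop current out of 4-20 mA range")
    return dp_lo + (ma - 4.0) / 16.0 * (dp_hi - dp_lo)

print(loop_to_dewpoint(12.0))   # mid-scale: -40.0 degC
```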
In dryer applications the moisture sensor performs best when mounted in a bypass. The built-in bypass of the DewPro eliminates costly hardware associated with traditional sampling methods. The DewPro installs simply into the process with its G1/2 or 1/2 MNPT threaded connection.
The heart of the MMY 30 is the new planar sensor element. It incorporates a superior aluminium oxide sensor that provides long-term calibration stability, excellent corrosion resistance, and improved speed of response. The sensor, mounted on a ceramic substrate, also has a reduced temperature coefficient.
Each DewPro is factory calibrated against precise NIST-certified moisture references and has an accuracy of ±2 °C dew point. The field calibrator connects to the DewPro and updates its calibration data automatically.
Long-term testing of the planar sensor, the heart of both the DewPro® and the DewComp™, has shown that a two-point field re-calibration can provide sufficient accuracy. The calibration is performed at a high dew point (ambient air) and a low dew point (process gas) using the DewComp™ sample chamber and the procedure outlined below. These two points are used to adjust the calibration data for the DewPro® over its full range.
2.2 Troubleshooting:
- The loop current is outside the range of 2-24 mA, as shown on the display or a current meter.
  a) The process dew point is out of range. If the dew point is above +10 °C (+50 °F), the current will go above 22 mA. Apply dry air for 20 minutes.
  b) If the dew point is below -90 °C (-130 °F), the current will go below 4 mA. The cause may be a defective sensor assembly or an electronics malfunction.
- There is no current: check the voltage and polarity across the +/- terminals with a DC voltmeter. If the voltage is within 9-32 V DC, consult the factory.
- Response time is very slow: verify the flow with an air flow meter. If the orifice is at the outlet of a 7 to 8 bar (≈100 psig) process, the air flow should indicate 20 to 30 l/h (500 cc/min, 1 cfh). If the flow is dramatically lower, the inlet filter may be clogged. Remove the filter and clean it with a solvent or replace it.
- Total Free Chlorine (TFC) Analyser
3.1 TFC Sensor
The Model 499A CL is designed for continuous measurement of free residual chlorine over the entire range of 0 – 20 ppm. Chlorine in aqueous solutions is used in industrial and municipal applications for a number of purposes such as disinfection, taste and odor control, and bleaching. In addition, chlorine is used as a powerful oxidising agent in various processes. Applications include potable water treatment and process water for pharmaceuticals.
The Model 499A CL consists of a platinum cathode, silver anode, wood junction, and a proprietary microporous membrane. The electrolyte is a saturated KCl/AgCl solution. As free residual chlorine diffuses through the membrane, an electrochemical reaction takes place at the cathode and anode in the presence of the electrolyte. The resulting current flow is proportional to the amount of chlorine diffusing through the membrane.
3.1-1 Sensor Maintenance
Sensor maintenance consists of setting up a preventative maintenance schedule. The sensor should be removed from the process periodically in order to keep the sensor clean and recharged and to replace the membrane.
BEFORE REMOVING THE SENSOR, be absolutely certain that the process pressure is reduced to 0 psig and the process temperature is lowered to a safe level.
3.1-2 Preventative Maintenance Schedule
For best results, the sensor should be examined at periodic intervals to determine the cleanliness of the sensor and the membrane. To determine this interval, examine the sensor after seven days of service and then after progressively longer periods until re-calibration or cleaning of the sensor is required.
3.1-3 Sensor Cleaning
Normally the sensor can be cleaned by washing with clean water. Make sure the membrane area is kept clean of any accumulation of the process such as dirt, fungus, algae, hair, etc.
Make sure the membrane is not damaged during cleaning.
3.1-4 Membrane Replacement
To replace the membrane assembly:
1) Hold the sensor with the membrane facing up.
2) Unscrew the membrane retainer and lift out the membrane assembly.
3) Check the cathode for nicks or tarnishing; sand the platinum cathode with 400-600 grit sandpaper, in one direction only, until shiny.
4) Rinse the cathode with the appropriate amperometric fill solution (#1 DO, #3 Ozone, or #4 FRO).
5) Place the new membrane assembly in the retainer and fasten it to the sensor body.
3.1-5 Sensor Recharging
To recharge the sensor:
a) Remove the membrane assembly.
b) Hold the sensor over a container or sink with the sensor tip facing down.
c) Loosen the electrolyte fill plug and allow the fill solution to flow out of the small holes surrounding the cathode. If the small holes are clogged, remove salts by soaking in hot clean water.
d) Place the sensor in a horizontal position and fill the sensor with the appropriate amperometric fill solution (#1 DO, #3 Ozone, or #4 FRO) to the top of the electrolyte fill hole.
e) Replace the fill plug in the electrolyte fill hole and position the sensor body vertically. Loosen the plug slightly to allow enough fill solution to flow out of the small holes to ensure air has not been trapped around the cathode post.
f) Place a new O-ring into the body groove, put the new membrane assembly in the membrane retainer and assemble it to the sensor body.
g) Return the sensor to a horizontal position and refill with amperometric fill solution. Tap the sensor to eliminate any trapped air bubbles and fill the cavity until the solution is at the top of the fill hole.
h) Wrap the fill plug with two turns of teflon tape and install it in the electrolyte fill hole. Tighten the plug until flush with the sensor body. Do not over tighten; instead, add more tape to prevent leaks.
i) The sensor will require up to 12 hours to equilibrate to the polarising voltage after recharging.
3.1-6 Chlorine Operational Bench Testing
- With the 499A CL attached to the analyser, immerse the 499A CL sensor in a beaker of de-ionised or distilled water and add one ml (cc) of 7 pH buffer. The water must be flowing (by means of a magnetic stirrer or other type of mixing). The stabilisation period with power applied to the analyser may take up to two hours.
- Check the 499A CL sensor output current (to find the sensor output current, refer to the appropriate section of the analyser manual). The current should be below 20 nanoamps.
- Calibrate for sensor zero.
- Add one drop of bleach and observe an increase in the reading.
- Analyse the sample by titration or any other means available and check the sensor output. It should be at least 75 nanoamps per ppm (mg/L). If unable to obtain this correlation, remove the membrane and use 400 or 600 grit sandpaper to lightly sand the platinum cathode. Once the membrane has been removed, a new replacement membrane is recommended.
If the current meets the above requirements, return the sensor to process. Perform further calibrations if required by the appropriate analyser instruction manual.
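The bench-test acceptance criterion above (at least 75 nA of sensor output per ppm of chlorine found by titration) can be expressed as a simple check. The function and the example numbers are illustrative, not from the manual:

```python
# Acceptance check for the bench test: sensitivity in nA per ppm must meet
# the 75 nA/ppm minimum quoted in the procedure.

def sensor_ok(output_na, titrated_ppm, min_sensitivity=75.0):
    """True if sensitivity (nA per ppm) meets the acceptance limit."""
    sensitivity = output_na / titrated_ppm
    return sensitivity >= min_sensitivity

print(sensor_ok(output_na=160.0, titrated_ppm=2.0))   # 80 nA/ppm -> True
print(sensor_ok(output_na=120.0, titrated_ppm=2.0))   # 60 nA/ppm -> False
```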
3.2 Total Free Chlorine Microprocessor Analyser (Model 1054A)
The Model 1054A TFC measures Total Free Chlorine, which commonly exists as a pH- and temperature-dependent mixture of HOCl and OCl–. It provides both automatic temperature compensation and automatic or manual pH correction for accurate measurement of Total Free Chlorine. Automatic pH correction is recommended for applications over pH 7 that may change pH by more than 0.2 units. The patented* Total Free Chlorine sensor is an amperometric sensor that eliminates the need for maintenance-intensive wet chemical analysis.
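The reason pH correction matters: free chlorine is a pH-dependent mixture because HOCl dissociates to OCl– with a pKa of roughly 7.5 at 25 °C. The textbook relation below illustrates the size of the effect; the analyser's own correction algorithm is proprietary and may differ.

```python
# Fraction of free chlorine present as HOCl, from the standard acid
# dissociation relation (pKa ~ 7.5 at 25 degC).

def hocl_fraction(ph, pka=7.5):
    """Fraction of free chlorine present as HOCl at the given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (6.5, 7.5, 8.5):
    # At pH 6.5 nearly all is HOCl; at pH 8.5 nearly all is OCl-.
    print(ph, round(hocl_fraction(ph), 2))
```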
The Model 1054A TFC is ideal for use in applications that do not contain species that bind chlorine, such as ammonia or some organics. Ammonia reacts with chlorine to form Total Residual Chlorine (TRC), which requires the use of a different sensor.
3.2-1 TFC Analyser Configuration
Set Mode “SEt”. Most of the analyser’s configuration is done while in the Set Mode. All menu variables are written to the analyser’s EEPROM (memory) when selected and remain there until changed. As these variables remain in memory even after the analyser’s power is removed, the analyser configuration may be performed prior to installing it.
Make sure the analyser loop is properly wired, then power up the analyser. Only power input wiring is required for analyser configuration. The analyser's display will begin showing values and/or fault mnemonics. All fault mnemonics are suppressed while the analyser is in Set Mode (the fault flag will continue to blink).
Enter Set Mode. Pressing the ACCESS key twice will place the analyser in Set Mode. The display will show “SEt” to confirm that it is in Set Mode. It will then display the first item in the set menu, “APH”. The analyser is now ready for user configuration.
If “LDI” displays, the Keyboard Security Code must be entered to access the Set Mode. To get out of the Set Mode, press the TFC key. Refer to the Configuration Worksheet on page 19 for the analyser ranges and factory settings.
Configuration Worksheet. The Configuration Worksheet provides the range of the various functions, the factory settings, and a column for user’s settings. As you proceed through the configuration procedures for each function of the analyser, fill in the appropriate information in the “USER” column. The configuration may be done in any order. However, it is recommended that it be done in the order as shown on the worksheet.
Automatic pH. Display Mnemonic “APH”. This function is used to correct the total free chlorine reading for the pH value of the process, either manually or automatically with a pH electrode. If the pH board is not installed, “not” will appear when “APH” is pressed, and the analyser will operate in manual pH mode only.
Analyser Zero. Display Mnemonic “-0-”. This function is used to zero the analyser/sensor loop.
Sensor Input. Display Mnemonic “in”. This function is used to display the current input from the sensor.
3.3 TFC Analyser Start-up and Calibration
The start-up and calibration procedures must be performed only after the installation and configuration have been properly carried out.
The start-up procedure for the 1054A TFC involves configuring the analyser to your particular process requirements and logging the various set-points in the user column of the Configuration Worksheet. It also involves the complete polarisation of the TFC sensor.
When the analyser is powered up, a polarisation voltage is applied between the anode and
the cathode. The sensor (electrode) current is initially very high, but then it falls off
quickly and settles down to a steady state after a few hours.
It is recommended to leave the analyser powered up to allow the sensor to be polarised while preparing for calibration or while undergoing routine maintenance. Sensor life will not be shortened under these conditions because only a very small current flows through the sensor. If for any reason the sensor has to be disconnected (or the analyser switched off) the sensor will have to be polarised before it can be ready for further operation.
3.3-2 Temperature Calibration
For accurate temperature compensation and temperature readings, the TEMP function of the analyser must be calibrated.
- Place the sensor in a container adequately filled with process sample.
- Place a calibrated temperature reading device in the sample container.
- Allow the readings to stabilise.
- When the readings are stable, compare the analyser's reading to that of the calibrated temperature indicating device.
If the analyser’s reading requires adjusting, follow these steps:
1. Press the TFC key to ensure that the analyser is not in Set Mode.
2. Press the TEMP key once. “°F” or “°C” will show briefly, then the present temperature is displayed in either °F or °C (depending on the unit selected in the t-C, d-t menu).
3. SELECT to adjust the value. The display will acknowledge briefly with “AdJ”, followed by the numeric display with the right digit flashing.
4. SCROLL (t) and SELECT to display the desired correct temperature.
5. Press ENTER. “°F” or “°C” will show briefly, then the desired temperature is displayed.
3.4 Analyser zero
This procedure is required to zero the analyser/sensor loop. The detailed procedure is in the attached document.
3.5 TFC Sample Standardisation
The TFC sensor must be standardised using a grab sample or a known TFC sample. A known TFC sample can be made by adding 2-3 drops of chlorine bleach to one (1) liter of de-ionised water. TIP: the addition of 10 ml of 5-7 pH buffer will help stabilise readings. Single-point calibration (ppm) is done only with the TFC key.
For accurate calibrations, the recommended minimum chlorine concentration of the process or sample is 1.0 ppm.
1) Place the sensor in the process, a grab sample or a known TFC sample and allow it to stabilise. If checking in a beaker, the solution must be stirred; a magnetic stirrer is recommended. If the sensor is in the process, the minimum recommended flow is 1 foot per second over the sensor membrane.
2) When the analyser's reading is stable, note the reading. Perform a chemical analysis of the process or grab sample as quickly as possible. The Model DPD-50 chlorine analysis kit is available through your Rosemount representative.
3) Note the current TFC reading. If it has not changed from the time the sample was taken, standardise the instrument/sensor loop to the value obtained from the chemical analysis as follows:
   A. Press the TFC key to make sure it is not in Set Mode.
   B. “Std” appears briefly and then the last total free chlorine value is displayed with the right digit flashing.
   C. SCROLL (t) and SELECT to display the true value from the chemical analysis.
   D. Press ENTER. The analyser is now standardised.
3.6 TFC Theory of Operation
The Model 1054A Total Free Chlorine Analyser automatically and continuously measures concentrations of Hypochlorous acid and Hypochlorite ion in water or aqueous solutions. The determination is based on the measurement of the electrical current developed by the TFC sensor in contact with the sample.
The Model 1054A TFC is sensitive enough to measure 0-20 mg/l (ppm) of total free chlorine.
The sensor is basically composed of a gold cathode and a silver/silver chloride reference electrode which serves as the anode. A proprietary, polymeric, microporous membrane is the point at which the chlorine in the sample enters the sensor. An external voltage is applied across the anode and the cathode, causing the cathode to be polarised. At this voltage, only sufficiently strong oxidising agents, such as HOCl and OCl–, will pass through the membrane and react at the gold cathode. The cathode reduction reaction is:
Chlorine + Electrons → Chloride
Cl2 + 2e– → 2Cl–
To complete the flow of current through the sensor, an oxidation reaction occurs simultaneously at the silver anode:
Silver + Chloride → Silver Chloride + Electrons
Ag + Cl– → AgCl + e–
The resulting current flow between the electrodes is directly proportional to the amount of total free chlorine in the sample solution. This current is detected by the analyser, which converts it into mg/l (ppm) of total free chlorine.
The sensor requires a continuous flow of fresh sample at the sensor tip. The molecules of chlorine are consumed in the electrochemical reaction. These chlorine molecules must be replaced by new ones in the process flow. The minimum recommended flow is 1 foot per second over the sensor membrane.
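Since the cathode reaction consumes two electrons per Cl2 molecule, Faraday's law links the cell current to the molar flux of chlorine reaching the cathode. A sketch of that proportionality (the flux value is illustrative):

```python
# Faraday's-law view of the amperometric sensor: cell current is
# proportional to the molar flux of chlorine reduced at the cathode.

F = 96485.0            # Faraday constant, C/mol
N_ELECTRONS = 2        # electrons per Cl2 molecule (Cl2 + 2e- -> 2Cl-)

def current_amps(cl2_flux_mol_per_s):
    """Cell current (A) from the molar flux of chlorine at the cathode."""
    return N_ELECTRONS * F * cl2_flux_mol_per_s

# e.g. a flux of 1 nmol/s of Cl2 gives about 0.193 mA
print(round(current_amps(1e-9) * 1e3, 3))   # -> 0.193 (mA)
```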
3.7 TFC Analyser Diagnostics
The Model 1054A TFC Analyser has a diagnostic feature which automatically searches for fault conditions that would cause an error in the measured total free chlorine, pH values and temperature values. If such a condition occurs, the current output and relays will act as configured in default and the fault flag and display will flash. A fault code mnemonic will display at frequent intervals. If more than one fault condition exists, the display will sequence the faults at eight second intervals. This will continue until the cause of the fault has been corrected. Display of fault mnemonics is suppressed when in Set Mode. Selecting the “SHO” item will display a history of the two most recent fault conditions unless “SHO” was cleared.
If the analyser is in hold and a fault occurs, the mnemonic “HLd” will display during the fault sequence.
3.7-1 Fault Mnemonics
The fault mnemonics, and the meaning of each, are listed in tables in the attached documents. If a fault mnemonic begins with an apostrophe (e.g. “’EEP”), the fault refers to the pH electronics board.
The Model 1054A TFC Analyser is designed with state-of-the-art microprocessor circuitry, making troubleshooting simple and direct. Subassembly replacement, i.e. printed circuit board replacement, is usually all that is required.
- Turbidity Analyser
The turbidity analyser measuring system consists of:
- Turbidity sensor system CUD 3
- Mycom CUM 121 / 151 Measuring transmitter.
4.1 Turbidity Sensor
The CUD 3 sensor is suitable for continuous measurement of turbidity in liquid media. Turbidity measurement is a method of determining the concentration of undissolved constituents in water, such as suspended solids or emulsified substances (emulsions). The turbidity measuring method has been internationally standardised (DIN 27027 and ISO 7027).
Typical areas of application are, for example:
- Monitoring of waste water treatment plant effluent according to the regulations for plant operator self-control
- Monitoring of surface waters
- Monitoring of waste disposal site or seep water
- Flocculation monitoring in water treatment
- Filter effluent and filter leakage monitoring
- Sludge concentration measurement.
- Monitoring of phase separation processes
4.2 Turbidity Instrument
The Mycom CUM 121 and 151 are microprocessor-based measuring and control instruments used to determine turbidity in liquid media. These instruments can be easily adapted to all turbidity measuring tasks under a wide range of environmental conditions.
Typical areas of application are:
- Sewage treatment plant effluent monitoring
- Water treatment
- Monitoring of public waters
- Industrial water treatment
- Sludge concentration measurement
- Drinking water monitoring
4.2-1 Turbidity Function
The excitation beam emitted by an infrared transmitter hits the medium to be measured at a defined aperture angle. The differences in light refraction between the entrance window and the medium water are taken into account. Particles in the medium to be measured produce a scattered radiation, which reaches a scattered light receiver at a specific aperture angle.
4.3 Measuring method
Turbidity sensor CUS 1
The 90-degree scattered light method according to ISO 7027 / DIN 27027, working with a measuring wavelength in the near-infrared range (880nm), guarantees turbidity value acquisition under standardised, reproducible conditions.
Turbidity sensor CUS 4
The excitation radiation is alternately emitted into the medium to be measured at a defined angle by 2 infrared transmitters. The particles in the medium generate scattered light, which is received by 3 scattered light receivers at different but defined angles. Different path lengths and measuring angles result in different scattered light signals.
The optimised arrangement of transmitters and receivers produces a family of curves representing measuring signals, from which an output signal proportional to the solids concentration is derived. This principle allows window soiling and changes in the intensity of the transmitting diodes to be effectively detected and taken into account when computing the result. The very different measuring signals produced by this arrangement permit an extremely exact determination of the solids concentration.
Turbidity units NTU and ppm
The unit of measurement NTU (nephelometric turbidity units) corresponds to one formazine turbidity unit (i.e., 1 TE/F = 1 FTU = 1 NTU) in standardised 90-degree turbidity measurement.
1 ppm (parts per million) is identical to a concentration of 1 mg per litre.
4.4 Turbidity Analyser Calibration
When to calibrate and how frequently?
The turbidity measuring system must be calibrated:
– before first-time operation
– following replacement of the sensor
– periodically, at intervals of approx. 1 year, depending on experience and environmental conditions
4.4-1 Calibration of sensor characteristic
Select the calibration type and measuring range according to the measuring task at hand. Always calibrate in the measuring range selected.
Calibration with factory calibration data:
Wet calibration values determined at the factory with the aid of zero solution and formazine are entered. This calibration type is suitable, for example, for measurement in drinking water, where the results must be reproducible and comparable and the factory calibration points 0 / 2.000 / 8.00 / 40.00 NTU have been assigned to the application range.
Calibration with standard solution or user-specific samples:
- Re-calibration of sensor system
- The undissolved constituents of the water are to be measured, rendering absolute values referred to the selected calibration standard. See the attached documents for the calibration procedure.
Deposits on the sensor optics may result in inaccurate measurement. Therefore the sensor must be cleaned at regular intervals. These intervals are specific to each installation and must be determined during operation. Clean the optics with the following agents depending on the type of soiling:
Clean the sensor mechanically using a soft brush. Then rinse thoroughly with water.
| Type of soiling | Cleaning agent |
| --- | --- |
| Limestone deposits | Short treatment with a commercial de-liming agent |
| Oily and greasy soiling | Cleaning agents based on water-soluble surfactants (e.g., household dish detergents) |
| Other types of soiling | Remove with water and brush |
- Gas Chromatograph
Gas chromatography may be defined as a method of physically separating and quantitatively identifying two or more components of a mixture by distributing them between two phases, one phase being a stationary bed of adsorbent surface and the other phase a carrier gas which percolates through and along the stationary bed.
5.1 Operating Principle
As shown in figure 4, the carrier gas flows continuously through the column at a constant rate. When a vapor sample is injected into the carrier stream, the components of the sample are mixed with the carrier and adsorbed onto the stationary bed.
The bed substrate has an affinity for each component in the sample, and since this affinity varies with each component, generally the lighter components are swept away (eluted) faster than the heavier. As a result of this action down the entire length of the column, the lighter components reach the end of the column before the heavier ones.
By use of the proper column length, the components are separated from each other, and the individual elution time serves to identify the particular component.
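In a process chromatograph this identification amounts to matching each peak's elution time against pre-established retention-time windows. The components and window values below are purely illustrative; real windows are determined empirically for the column and operating conditions in use.

```python
# Hypothetical retention-time windows (seconds) for a light-hydrocarbon
# separation; actual values depend on column length, bed and temperature.
WINDOWS = {
    "methane": (30.0, 45.0),
    "ethane": (60.0, 80.0),
    "propane": (110.0, 140.0),
}

def identify(elution_time):
    """Return the component whose retention-time window contains the peak."""
    for component, (lo, hi) in WINDOWS.items():
        if lo <= elution_time <= hi:
            return component
    return "unknown"
```

A peak eluting at 70 s would thus be reported as ethane, while a peak outside every window is flagged as unknown.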
A chromatograph consists of two basic sections: analysing and control. Normally the analyser section is located in the field near the sample point (or points), and the control section is located remotely, usually in a central control room.
For discussion purposes, the analysing section may be subdivided into smaller units: valves, columns and detectors. The control section consists of programmer, recorder and auxiliary units, including stream selector and peak-picker memory unit.
5.2 Analysing Sections
5.2-1 Sample Valves
Chromatographic valves serve two basic functions:
a) To introduce a fixed volume of sample into the carrier stream (sample valve), and
b) To switch segments of the column out of and into the carrier stream (column-switching valve).
Several types of valves are available. Figures 5 and 6 depict the varied construction of the four most popular types, namely slide, spool, rotary and flexible-diaphragm. The slide valve has a rectangular-shaped Teflon block which slides between two stainless steel plates. Three holes through the slide permit the flow of sample and carrier gas.
The diameter of the centre hole is calculated to furnish a predetermined volume of sample. When the valve is actuated by air pressure on the diaphragm, the slider moves the sample volume hole from the sample stream to the carrier stream, thereby injecting the sample into the column. The spool valve has a round shaft which slides when pushed by the diaphragm. Three grooves are machined around the shaft to permit flow between side fittings. O-rings isolate each groove to form a compartment; when the valve is actuated, the sample volume is moved into the carrier stream and injected into the column. The rotary valve has the sample volume machined into the base section of the valve. When actuated, the base section rotates 90° and places the sample volume in the carrier stream. Models of this valve are available with either air or electrical drive.
The diaphragm valve is constructed in two sections. The top section has holes drilled from the six fittings to the lower surface of the block. Between each hole on the lower surface is a concave “dimple” to permit flow between the holes. The lower block has six holes drilled to its top surface, spaced to match the “dimples” in the upper block. A flexible diaphragm is placed between the two sections; when air pressure is applied to a hole in the bottom section, the diaphragm moves upward at that point and closes the “dimple”, thus stopping flow between the two ports. In the valve “on” (sample) position, three alternate “dimples” are blocked, with the resultant flow pattern shown in figures 5 and 6. In the valve “off” (normal) position, the remaining three “dimples” are blocked. High quality is essential in a sample valve because it must be leak-proof, both to the outside and between valve ports, while handling liquid or gas streams which are sometimes at high pressure and high temperature. The valve must operate thousands of times without failure and be so constructed as to permit field maintenance and repair.
5.2-2 Column Switching Valves
In making some analyses, columns are sometimes used that are affected by other stream components (moisture or oils, for example) in which there is no interest. These other components can damage the column by plating or washing out the column material. To avoid this, a pre-cut column is used to separate the contaminating components from those of interest, allowing the desired components to flow into the second, or analysis, column. The undesirable components are then back-flushed or purged into the vent system by an auxiliary carrier supply. Column-switching valves are used in this manner to accomplish the desired separation.
The column-switching valve differs from the sample valve only in function. Figure 7a depicts a single-column configuration where only a four-port sample valve is required. When the sample valve is energised, the slider moves one position to the left, placing the sample column in the carrier stream. The carrier gas sweeps the sample through the column, where separation occurs on the way to the detector. In the event that only the light components in a sample are of interest, the heavy components may be discarded, saving the time required to sweep them through the column.
Figure 7b shows how this is done using a sample valve, a column-switching valve, a pre-cut (or stripper) column and an analysis column. After injection of the sample into the stripper column, the lighter components are separated from the heavier and move on into the analysis column. At this time the switching valve is actuated, reversing the flow in the stripper column and purging the heavy components out to vent, while the lighter components continue through the analysis column to the detector.
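The inject-and-backflush cycle described above is driven by a timed valve sequence. The sketch below is a hypothetical timing table, not the timing of any particular instrument; real cycle times are set for the specific stripper/analysis columns and carrier flow in use.

```python
# Hypothetical cycle times (seconds) for an inject/backflush cycle.
SEQUENCE = [
    (0.0,  "sample valve ON: inject sample into stripper column"),
    (5.0,  "sample valve OFF: sample loop refills from process stream"),
    (25.0, "switching valve ON: backflush heavies from stripper to vent"),
    (90.0, "switching valve OFF: cycle complete, ready for next injection"),
]

def events_between(t0, t1):
    """Valve actions scheduled in the half-open interval [t0, t1)."""
    return [action for t, action in SEQUENCE if t0 <= t < t1]
```

A programmer stepping through the cycle in ten-second slices would fire the sample-valve actions in the first slice and the backflush in the third, which mirrors the sequence the text describes.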
By proper application of valves and columns, many separations may be achieved which would otherwise be impossible or would require much longer time cycles. Figure 8 illustrates the use of 10-port, flexible-diaphragm valves in a configuration using two different carrier gases to perform analysis in two loops, each containing a pre-cut (stripper) column and an analytical column. This switching arrangement permits the use of two different types of columns with separate and different carrier gases for separations which could not be achieved otherwise.
5.3 Columns
The working element of a chromatograph is the column, the device that effects the separation of the component or components to be measured. It consists of small-diameter tubing packed with a bed of material which offers varying degrees of resistance to the stream components. Beds are selected that have the ability to retard (retain) some components while passing (eluting) others quickly. Column length, as well as material, affects component separation. A schematic of a column is shown in figure 7a.
In gas-solids chromatography the stationary bed is made of fine solids such as silica gel, molecular sieve, charcoal, etc.
The sample in the carrier gas is adsorbed onto the porous surfaces of these solids. In gas-liquid chromatography the stationary bed is made of inert solids such as firebrick, diatomaceous earth, glass beads or other materials, on the surface of which a liquid adsorbing agent (substrate) is deposited. In some instances the substrate may even be deposited on the inside wall of extremely small-diameter tubing. In this type of column the sample in the carrier mixture is adsorbed by the substrate instead of by the solid material. The solids, in either case, are packed into 1/8- or 1/4-inch OD tubing, and the column tubing is usually coiled for ease of mounting in the analyser enclosure.
The following are the most important factors affecting the separation efficiency of a column:
1) Bed particle size: smaller granules offer a larger surface area for vapor contact but pack tighter and require higher carrier pressure. The increased surface area improves resolution, but this advantage is offset by the tendency of the smaller particles to channel the flow.
2) Liquid substrate: the amount of substrate deposited on the bed affects separation within limits; no one substrate is suitable for all mixtures.
3) Column length: increasing length increases elution time and requires higher carrier pressure.
4) Column temperature: increased temperature increases separation speed but decreases resolution (or peak shape).
5.4 Detectors
As the sample and carrier mixture elutes from the column, it passes through a detector where the components are measured in their separated states. By far the most common detectors in use today are the thermal conductivity and hydrogen flame ionisation types; these two are discussed in some detail below. Other types of detectors include the beta ionisation, electron-capture, ultrasonic whistle, gas density and photoionisation.
5.4-1 Thermal Conductivity
Figure 9a depicts the thermistor elements in a thermal conductivity detector and shows the elementary electrical schematic of the Wheatstone bridge in which they are used. As a sample component passes the measuring thermistor, the element is heated owing to the lower thermal conductivity of the component compared with that of the pure carrier gas. This temperature rise causes a change in thermistor resistance and a subsequent change in bridge balance.
In the hot-wire thermal conductivity detector, wire filaments replace the thermistors as resistance elements and the operation is the same. To achieve higher sensitivity, the standard resistors shown in figure 9a are also replaced with wire filaments, and the detector is modified so that all four elements are in the carrier gas and effluent stream.
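The bridge-unbalance behaviour can be sketched numerically. The supply voltage and resistances below are illustrative values only, and the two-arm arrangement is a simplification of the four-filament detector just described: the output is zero when the measuring and reference elements match, and departs from zero as the measuring element's resistance changes.

```python
def bridge_output(v_supply, r_meas, r_ref, r_fixed=1000.0):
    """Differential output of a simple Wheatstone bridge with the measuring
    and reference elements in adjacent half-bridges; zero when balanced."""
    v_meas = v_supply * r_meas / (r_meas + r_fixed)
    v_ref = v_supply * r_ref / (r_ref + r_fixed)
    return v_meas - v_ref

balanced = bridge_output(10.0, 1000.0, 1000.0)   # pure carrier in both cells
unbalanced = bridge_output(10.0, 900.0, 1000.0)  # component heats the thermistor
```

With pure carrier in both cells the bridge reads zero; when a component passes and the thermistor's resistance shifts, the non-zero output is the peak signal sent on to the programmer.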
5.4-2 Flame Ionisation
The hydrogen flame ionisation detector is most frequently used to monitor components in small concentrations.
Capable of analysing in the range of 0 to 1 ppm, the detector uses sophisticated circuitry to amplify extremely small currents. Figure 9b depicts a schematic diagram of a commercial detector cell. A DC voltage of 100 to 1,500 volts is connected in series with the electrodes. The thermal energy in the hydrogen flame plasma is sufficient to induce emission of electrons from hydrocarbon molecules in the column effluent stream. The ions are collected at the electrodes and the signal is amplified in a high-quality electrometer-amplifier.
When the unit is properly calibrated, the sensitivity of the detector is proportional to the carbon content of the mixture.
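Since the calibration is empirical, the working conversion from peak signal to concentration reduces to a response factor determined with a calibration gas. The function names and figures below are illustrative assumptions, not part of any specific instrument's procedure.

```python
def response_factor(cal_area, cal_ppm):
    """Empirical response factor from a calibration gas of known concentration."""
    return cal_area / cal_ppm

def fid_concentration(peak_area, factor):
    """Concentration of an unknown peak, assuming the same (carbon-dependent)
    response as the calibration component."""
    return peak_area / factor

f = response_factor(500.0, 10.0)   # e.g. 500 area counts for a 10 ppm standard
unknown = fid_concentration(250.0, f)
```

Because the detector's response scales with carbon content, a factor determined on one component only transfers to another if the difference in carbon number is accounted for, which is why calibration gases are chosen to match the stream.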
5.5 Analyser Housing
The design of analyser hardware housing is dictated by three requirements:
- a) To comply with electrical codes for hazardous areas,
- b) To provide a temperature controlled enclosure, and
- c) To protect from physical abuse.
Two styles of housing comply with these requirements: the explosion-proof dome and the cabinet housing. The domed housing is a compact assembly which provides quick access to all hardware components. Temperature control is accomplished with cartridge heaters inserted in the heat-sink block around which the column is coiled. Maximum access from three sides makes maintenance and repair easier than on enclosed housings. The cabinet housings use explosion-proof fittings only for the electrical circuitry; items requiring temperature control are housed separately. The heater is a ring-fin coil encircling a blower fan, and a hot-air blower is used for heat control.
5.6 Control Section
The control section of a process chromatograph includes all units of the analyser system that are related to the power supply or to the readout and command circuitry. The discussion below covers the programmer, recorder and other auxiliary devices.
5.6-1 Programmer
The programmer is the control unit of the process analyser. Mounted in a sheet-steel cabinet, it is not explosion-proof but may be air-purged for field installations to meet hazardous electrical classification requirements. It contains the timer, the power supply for the detector bridge circuit, the automatic zero mechanism, attenuation pots for each component measured and a memory amplifier, if required. Its primary function is the amplification and transmittal of the detector signal to the recorder and/or controller.
A function-selector permits the selection of a chromatogram mode, a calibration mode or automatic analysis. Additional functions of the programmer include command of the sample and column valves in the field analyser unit, stream-switching circuitry (although this function is often housed in a separate chassis) and programmed temperature control.
5.6-2 Computer-Based Controls
The use of chromatographic component analysis has increased dramatically in recent years. Emphasis on process efficiency, reduction of impurities and minimisation of fuel costs has provided strong economic incentive for on-stream analysis and control. And reservations concerning the ability of equipment to adequately control the process have given way, in the face of advances in the state of the art of equipment design and application, to a gradual dependence on analytical systems by both operations and management.
This confidence in analytical control has been strongly influenced by the ease with which complex process control equations can now be solved. Computing equipment used a decade ago was custom-built for each application, in both hardware and software, from systems originally only a step or two removed from office data processing. Current analytical measurement and control systems are a blend of common control modes (set point, feed-forward, predictive, data acquisition, etc.) and powerful data functions (hierarchies, bulk storage, data manipulation, data communication, man-machine interfaces, system diagnostics, etc.).
Computer-based control has brought the engineer and system designer an enormously flexible tool, and we are face to face with a situation in which we are limited only by our own imaginations. A look at current system design will point out the design features now available.
The Optichrom 2100 Process chromatograph System combines proven sensor components with integrated circuit analyser electronics and a microprocessor based programmer. Analog-to-digital conversion is done at the analyser in the field, and data is then transmitted serially over a twisted pair of wires.
Signals are optically isolated at the analyser. The programmer is capable of operating four analysers, with up to 16 streams per analyser and up to 50 components per stream. Programmer outputs can consist of up to 40 continuous (4-20 mA) signals, as well as bar graphs, trend recording, digital printing or a computer link. Scaling and data reduction can be accomplished within the programmer, as can programs which are manually loaded through its keyboard or downloaded from a cassette tape.
- Specific Gravity Measurement
Most major gas-flow metering systems require that the metered quantity be presented in heat units and, in consequence, it is often necessary to make continuous and accurate measurements of specific gravity to help achieve this requirement. Specific gravity can be evaluated by relating the molecular weight of the gas (or gas mixture) to the molecular weight of air, or by evaluating the relative density of the gas (or gas mixture) and compensating the result for the Boyle's Law deviation of both the gas (or gas mixture) and the air.
The NT 3096 Specific Gravity Transducer adopts a combination of these two methods: by measuring the density of the gas under controlled conditions, the value of density obtained is directly related to the molecular weight of the gas, and thus to its specific gravity.
6.1 Principles of Operation
Theory – Specific Gravity Measurement. By definition:

Gas Specific Gravity = Molecular Weight of Gas / Molecular Weight of Standard Air

i.e. G = MG / MA ………………………… (1)

where MA is taken as 28.96469.

Relative Density = Density of Gas / Density of Air

i.e. ρr = ρgas / ρair ………………………… (2)

at the same conditions of temperature and pressure.
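Equations (1) and (2) translate directly into a short sketch. Only the air molecular weight is taken from the text; the methane figure in the example is a standard reference value used for illustration.

```python
M_AIR = 28.96469  # molecular weight of standard air, as used in equation (1)

def specific_gravity(mol_weight_gas):
    """Equation (1): G = M_gas / M_air."""
    return mol_weight_gas / M_AIR

def relative_density(rho_gas, rho_air):
    """Equation (2): rho_r = rho_gas / rho_air, both at the same T and P."""
    return rho_gas / rho_air

g_methane = specific_gravity(16.043)  # roughly 0.554
```

As the following paragraph notes, the two quantities coincide numerically only once the supercompressibility of both gas and air is accounted for.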
The relative density is numerically equal to the specific gravity when the supercompressibility factors of both the gas and the standard air at the measurement conditions are taken into consideration.
6.2 Functional Description – NT 3096
The 3096 Specific Gravity Transducer consists of a gas reference chamber constructed such that it surrounds a vibrating-cylinder gas density transducer, thereby helping to achieve good thermal equilibrium. The gas reference chamber has a fixed volume which is initially pressurised with the sample gas. The chamber is then sealed by closing the reference chamber valve, thus retaining a fixed quantity of gas, now known as the reference gas.
The sample gas enters the instrument at the base plate and passes through a filter, followed by a pressure-reducing orifice. The sample gas is then fed through a spiral heat exchanger (wound round the reference chamber) so that it enters the gas density transducer at the equilibrium temperature. The gas then flows down to a pressure control valve chamber.
The reference gas pressure acts through a separator diaphragm on the pressure
control valve chamber so that the gas pressures on both sides of the diaphragm are
equal, i.e. the gas pressures within the gas density transducer and the reference
chamber are equal.
As the ambient temperature changes, the pressure of the fixed volume of reference gas will change as defined by the gas laws. This change in pressure will adjust the sample gas pressure within the gas density transducer such that the temperature and pressure changes are self-compensating.
If the sample gas pressure rises above the reference chamber pressure, the pressure control valve opens to vent the excess gas via the outlet connection in the base plate. In this manner the sample gas pressure is made equal to the reference gas pressure. For gas to flow it is necessary that the supply pressure be greater than the reference pressure, which in turn must be greater than the vent pressure.
A pressure gauge is fitted so that the gas pressure within the gas density transducer can be monitored. This is desirable when charging the reference chamber and for general maintenance use.
Electrical connections to the NT 3096 Transducer are taken to a terminal box located on the bottom surface of the base plate.
A thermal insulation cover is placed over the complete instrument so that rapid changes in ambient temperature will not upset the temperature equilibrium of the instrument.
6.3 Transducer Sensing Element.
The gas density transducer consists of a thin metal cylinder which is activated so that it vibrates in hoop mode at its natural frequency. The gas is passed over the inner and outer surfaces of the cylinder and is thus in contact with the vibrating walls. The mass of gas which vibrates with the cylinder depends upon the gas density and, since increasing the vibrating mass decreases the natural frequency of vibration, the gas density for any particular frequency of vibration can be determined.
A solid-state amplifier, magnetically coupled to the sensing element, maintains the conditions of vibration and also provides the output signal. The amplifier and signal output circuits are encapsulated in epoxy resin.
The instrument is supplied with its reference chamber empty and thus in an uncalibrated condition. After installation on site it is necessary to charge and calibrate the instrument as follows.
6.4-1 Selection of Reference Chamber Pressure.
The first requirement is the selection of the most appropriate reference chamber pressure after considering the type of gas, the range of specific gravity to be measured and the accuracy desired. It is essential to ensure a good gas flow through the instrument, and for this it is recommended that the selected reference chamber pressure be at least 10% below the minimum gas pipeline pressure. In cases of low gas pipeline pressure, a pump may be fitted to boost the sample gas pressure to achieve the required flow rate.
6.4-2 Calibration Gases.
The calibration gases to be used must be of known specific gravities and must substantially represent the properties of the line gas to be measured. For example, if measuring a natural gas which is substantially methane and carbon dioxide, then these two gases, in their pure forms or at defined specific gravities, should be used in the calibration. This is necessary in order to account for any compressibility characteristics of the component gases.
6.4-3 Calibration Method
It is recommended that the system be purged, as detailed under Post Installation Checks, before any calibration is attempted. This ensures that the reference gas to be used represents the mean quality of the line gas to be measured, as far as supercompressibility is concerned, and is pressurised to the selected reference chamber pressure. Various formats are included which cater for the individual transducer calibration and also for the envisaged transducer/readout equipment systems; each format is capable of reproduction for use by the customer.
Having selected the two most suitable calibration gases and the appropriate format
for recording the calibration data, proceed as follows:-
Note:- The insulating cover should be placed over the instrument at least one hour before calibration is attempted to ensure temperature equilibrium. Valve C should be locked in the OPEN position and electrical power applied to the instrument.
1) Ensure that the isolation valve and valve A are turned OFF.
2) By means of valve B, pass calibration gas X through the instrument at a flow rate less than that necessary to cause control valve resonance. Record the periodic time Tx of the density transducer frequency signal after the reading has stabilised.
3) Repeat operation 2 using calibration gas Y, recording the time period Ty of the density transducer frequency signal after the reading has stabilised.
4) From the results so far obtained, the periodic times for the minimum and maximum specific gravity spans can be calculated using the format provided.
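The span calculation in step 4 can be sketched as a two-point interpolation between the calibration gases. This is a simplification made for illustration: the real calculation follows the manufacturer's format sheets and the transducer's quadratic density/period law, and every number below (gas specific gravities, periodic times in microseconds) is hypothetical.

```python
def span_period(g_target, g_x, t_x, g_y, t_y):
    """Interpolate in T^2 between two calibration gases (g_x, t_x) and
    (g_y, t_y) to estimate the periodic time corresponding to a span
    endpoint of specific gravity g_target."""
    tx2, ty2 = t_x ** 2, t_y ** 2
    t2 = tx2 + (g_target - g_x) * (ty2 - tx2) / (g_y - g_x)
    return t2 ** 0.5

# Illustrative figures: gas X (G = 0.55, Tx = 520 us),
# gas Y (G = 1.52, Ty = 560 us); one span endpoint at G = 1.0.
t_low = span_period(0.55, 0.55, 520.0, 1.52, 560.0)
t_mid = span_period(1.0, 0.55, 520.0, 1.52, 560.0)
```

At the calibration points the interpolation returns the measured periods exactly, and intermediate specific gravities map to periods between Tx and Ty, which is the behaviour the span format exploits.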
6.4 Servicing Procedures
The recommended servicing and maintenance which can be carried out under field conditions involves calibration checks, fault-finding procedures and simple maintenance. Should a fault be traced to a reference chamber malfunction, it is strongly recommended that the repair of the faulty unit be restricted to a qualified engineer, or that the faulty unit be returned to Solartron, where comprehensive facilities for repair exist.
If a calibration check reveals a significant error, the cause of this error (e.g. reference chamber leak, deposition on the vibrating cylinder) should be thoroughly investigated before any re-calibration attempt is made.
6.5 Check Calibration.
It is normally good practice to carry out periodic checks on the system accuracy. This is simply achieved by passing a gas of known specific gravity through the instrument, as previously detailed under Calibration. It is preferable that the specific gravity of this calibration gas lies within the specific gravity span of the system under test, since this will simplify the system check procedure. The Specific Gravity Transducer can be checked using a gas whose specific gravity is outside this range, provided that the associated electronic system is capable of making the necessary time period measurement and the gas characteristics are similar to those of the system line gas.
6.6 Fault Finding Procedure.
Adverse results from a check calibration, or suspect system readings, can generally be categorised into three groups.
- a) Instrument Over-reads. This is generally due to deposition, condensation or corrosion on the vibrating cylinder walls. Remedial action consists of removing the density transducer and cleaning the sensing element assembly, as detailed later. If corroded, or in any way damaged, the sensing element should be replaced with a new item.
- b) Instrument Under-reads. This is most probably due to a gas leak from the reference chamber. Before dismantling the instrument it is desirable to locate the leak, which may be classified as follows:
- Reference Chamber to Sample Gas Path
- Pressure Control Valve Malfunction
- c) Erratic Signal. If the erratic signal is only present while there is a flow of sample gas, then it is likely due to a malfunction of the pressure control valve, brought about by the presence of dirt. In this case the valve mechanism should be stripped down, cleaned and re-assembled by a qualified engineer. Any poor seals or damaged parts should be replaced.
6.7 Electronic Faults.
Transducer electronic faults can be verified by applying a few simple checks.
- Spool-body Assembly. The magnetic drive and pick-up assembly (spool-body) can be checked visually and for electrical continuity by measuring the resistance of each coil. The resistance of the pick-up coil should be 40 ohms, and that of the drive coil should agree with its specified value.
Fig. 2.9 is a schematic block diagram of the transducer vibration sustaining circuit, showing the plug/socket and terminal identification.
- Electronic Amplifier. If careful examination of the sensing element and spool-body assembly does not reveal the cause of the problem, the amplifier assembly should be replaced by a new assembly of a similar type. This will indicate whether the problem is within the old amplifier. Alternatively, the amplifier can be plugged into a known-good spool-body/sensing element assembly.
If this procedure is not practical, some indication of a fault may be obtained by checking the current consumption at specified supply voltages as described earlier.
One further test involves changing the supply voltage from 15 to 25 volts and checking that the output frequency does not change.
Apart from scheduled calibration checks and filter replacements, the frequency of the latter being dependent upon the condition of the sample gas, no other routine maintenance should be required.
When a fault is suspected, a competent engineer can identify which part of the system is likely to be defective and take the necessary remedial action. The depth of dismantling required to service the faulty component is left to the discretion of the engineer, but to cover all eventualities a full dismantling procedure, down to component level, is detailed. The maintenance can be broken down into four stages:-
- Density Transducer Removal: This is accomplished on site, leaving the main transducer assembly installed whilst transporting the density transducer to a clean environment for further servicing.
- Main Transducer (NT3096) Removal: This is necessary for servicing of the pressure control valve and/or the reference chamber diaphragm, both to be carried out in a clean environment.
- Pressure Control Valve Removal, after stage (b).
- Reference Chamber Diaphragm Removal, after stage (b).
- Gas Density Transducers
7.1 Principle of operation
The transducer sensing element consists of a thin metal cylinder which is activated so that it vibrates in a hoop mode at its natural frequency. The gas is passed over the inner and outer surfaces of the cylinder and is thus in contact with the vibrating walls. The mass of gas which vibrates with the cylinder depends upon the gas density and, since increasing the vibrating mass decreases the natural frequency of vibration, the gas density for any particular frequency of vibration can be determined.
A solid-state amplifier, magnetically coupled to the sensing element, maintains the conditions of vibration and also provides the output signal. The amplifier and signal output circuits are encapsulated in epoxy resin.
7.1-1 Orifice Plate Metering
Density measurement using a density transducer is now a well-established and preferred method for orifice metering which, in major metering stations, substantially replaces the density calculation method employing pressure and temperature measurements.
In order to establish the volumetric flow or mass flow through an orifice plate, it is necessary to know the density of the fluid at the orifice and the differential pressure across the orifice.
Fig. 2.1 shows an arrangement where the differential pressure due to the downstream pressure recovery causes the flow of gas through the density transducer. Whilst this pressure recovery method is normally recommended, alternative arrangements may be preferred. It is however necessary to ensure that the derivation of the Expansion Coefficient for the Orifice flow calculations is correct for the tapping point at which the density is being measured.
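The role the measured density plays in the orifice calculation can be sketched with the standard mass-flow relation. This is a simplified form for illustration: it omits the velocity-of-approach factor 1/√(1 − β⁴) used in full ISO 5167-style calculations, and all parameter values in the example are illustrative.

```python
import math

def orifice_mass_flow(c_d, eps, d_orifice, rho, dp):
    """Simplified orifice mass flow: qm = Cd * eps * A * sqrt(2 * rho * dp),
    where rho is the fluid density at the tapping point and dp the
    differential pressure across the orifice."""
    area = math.pi * d_orifice ** 2 / 4.0
    return c_d * eps * area * math.sqrt(2.0 * rho * dp)

# Illustrative figures: Cd = 0.6, eps = 1.0, 50 mm bore, 50 kg/m3 gas.
q = orifice_mass_flow(0.6, 1.0, 0.05, 50.0, 10000.0)
```

The square-root dependence is why the density must be measured at (or corrected to) the tapping point named in the text: an error in ρ propagates as half its relative value into the computed flow.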
7.1-2 Volumetric Flow Meters
Positive displacement meters or turbine flow meters can be converted to mass flow meters using the Solartron density transducer and a simple readout system. Since both the flow meter and the density sensor signals are in frequency form, the readout system need use only digital techniques; see Fig. 2.2.
The combined accuracies of the density measurement and a digital readout are considerably higher than that of volumetric flow meters, so that the overall accuracy of mass flow measurement will be almost entirely determined by the accuracy of the volumetric flow meter.
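The digital combination of the two frequency signals reduces to a single multiplication. The K-factor and figures below are illustrative assumptions, not values from any particular meter.

```python
def mass_flow_rate(meter_freq_hz, k_factor_pulses_per_m3, density_kg_m3):
    """Mass flow from a pulse-output volumetric meter and a measured density:
    volumetric flow = f / K, then mass flow = volumetric flow * density."""
    vol_flow = meter_freq_hz / k_factor_pulses_per_m3  # m^3/s
    return vol_flow * density_kg_m3                    # kg/s

# Illustrative: 100 Hz pulse train, K = 1000 pulses/m3, gas at 50 kg/m3.
qm = mass_flow_rate(100.0, 1000.0, 50.0)
```

Because both inputs arrive as frequencies, a counter-based readout performs this arithmetic without any analog conversion, which is the point the text makes about using only digital techniques.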
7.2 Calibration
Calibration of the transducers is carried out prior to despatch to ascertain the density/time period characteristic of each instrument. Also included is a performance check involving density/temperature ranging. After installation, and periodically thereafter, check calibrations should be carried out to verify the factory calibration. The density/periodic time relationship follows a well-defined law:
P = K0 + K1·T + K2·T²

where P = indicated density
T = periodic time
K0, K1 and K2 = transducer constants
It is necessary to specify the individual transducer constants to achieve optimum accuracy. The variation between transducers of the same range and type is typically about 10% of the nominal density range.
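Evaluating the law is a one-line computation once the individual constants are known. The constants in the example below are purely illustrative; real values come from each transducer's calibration certificate.

```python
def density_from_period(t, k0, k1, k2):
    """Evaluate the transducer law: P = K0 + K1*T + K2*T^2,
    with T the measured periodic time of the vibrating cylinder."""
    return k0 + k1 * t + k2 * t * t

# Illustrative constants only (not from any real certificate):
rho = density_from_period(2.0, 1.0, 2.0, 3.0)  # 1 + 4 + 12 = 17
```

Using nominal rather than individual constants introduces the roughly 10% unit-to-unit spread quoted above, which is why the certificate figures must be entered for optimum accuracy.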
Detailed calibration method included in the course attachment documents.
7.3 Transducer Maintenance
In order to maintain the integrity of the transducer, periodic check calibrations are recommended. If during such a check a significant error is discovered, the cause of this error should be thoroughly investigated before a re-calibration is attempted; a damaged cylinder, for example, can cause unreliable measurement.
Checks for deposition, corrosion and condensation on the sensing element, the state of the in-line filters and some electrical checks on the amplifier/spool-body circuits are all the maintenance required (see attached documents).
7.4 Post Maintenance Tests
It is not necessary to carry out a full calibration on a transducer which has undergone a full servicing. However, it is recommended that a check calibration be carried out to assure correct performance. Should such a check uncover a significant calibration offset, it is recommended that a replacement cylinder be fitted and a full calibration carried out, or that the transducer be returned to Solartron for further defect analysis. If during a servicing a cylinder has been changed, it is essential that a check calibration be carried out using the new cylinder's certificate figures; however, a full calibration is normally recommended.
7.5 Fault Finding
The most likely cause of malfunction is the presence of dirt or condensate on the sensing element. A visual check on the condition of both cylinder and spool-body will confirm or eliminate this source.
Disorientation of the spool-body/cylinder and the fitting of the wrong cylinder, especially after a servicing, must not be ruled out. Great care is essential in this respect.
Lastly, amplifier malfunction is a possible cause; this can easily be proved by fitting a known serviceable amplifier to the transducer, or by checking the suspect amplifier in a known serviceable transducer system.
- Technical Tasks
Participants are to perform the following tasks using the attached documents and with the guidance of the OJT instructors.
8.1 Oxygen Analyser
- Demonstrate analyser calibration with air.
- Perform sensor routine maintenance.
- Perform analyser service.
8.2 Net Oil Analyser
- Perform analyser calibration.
- Perform analyser service.
8.3 Optical Intensity Analyser
- Perform Zero and span calibration.
- Perform preventive maintenance.
- Describe how to troubleshoot analyser system.
- Perform corrective maintenance.
8.4 Turbidity Analyser
- Demonstrate sensor calibration.
- Check limit contactor function.
- Perform system troubleshooting.
8.5 Dew Point Analyser
- Demonstrate field calibration using the Dew Comp "MCY 40".
- Perform system troubleshooting.
8.6 H2S Gas Detection System
- Demonstrate removal and re-installation of the sensor element.
- Perform sensor calibration using calibration gas bottle.
- Perform sensor servicing and inspection.
- Check the calibration of system controller.
- Perform controller maintenance.
- Troubleshoot system problems.
8.7 Gas Chromatography
- Explain how to use a service panel.
- Describe how to start the chromatograph.
- Describe how to perform scheduled maintenance.
- Describe how to resolve alarm problems.
- Demonstrate how to shut down the chromatograph.
8.8 Moisture Analyser
- Perform periodic calibration verification.
- Perform scheduled maintenance.
- Demonstrate troubleshooting maintenance.
- Check or replace faulty central-station cards.
- Check function of field unit cards.
8.9 Chlorine Analyser
- Perform sensor maintenance.
- Perform sensor troubleshooting.
- Demonstrate microprocessor configuration.
- Perform system calibration.
- Demonstrate keyboard security code setting.
- Troubleshoot microprocessor problems.
8.10 Specific Gravity Measurement
- Demonstrate principles of operation:
(a) Functional Description. (b) Schematic diagrams.
- Demonstrate calibration of specific gravity transducer.
- Perform recommended servicing which can be carried out in the field:
(a) Check calibration. (b) Fault finding.
(c) Electronic fault verification.
- Perform routine maintenance for the specific gravity transducer:
(a) Filter replacement. (b) Transducer removal.
(c) Pressure control valve removal. (d) Leak testing.
(e) Reference chamber diaphragm removal.
(f) Sensing element servicing.