Based on the hierarchical structure proposed by Michon (1985), ACC primarily supports the driver at the control level (i.e., accelerating and braking) and the maneuvering level (i.e., speed selection, gap acceptance, and obstacle avoidance); it does not perform the entire dynamic driving task. The driver must monitor the system and take over when required, either when prompted by the system itself (e.g., when a forward collision warning, FCW, is issued) or when ACC does not react to a lead vehicle because of system limitations, such as the radar's field of view. Several studies have questioned drivers' ability to reclaim control effectively and safely after a system failure, raising concerns that ACC (and, by extension, higher levels of automation) degrades situation awareness and slows responses to critical events (for a review, see de Winter et al., 2014). Situation awareness is defined as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future" (Endsley, 1988, p. 792). The review by de Winter et al. (2014) shows that results for situation awareness vary between studies: ACC use can degrade situation awareness when drivers engage in secondary tasks, but improves it when they attend to the driving task. Similarly, a number of experiments have found that ACC drivers can be slower than manual drivers to respond to critical events, while many studies have shown faster reactions to artificial visual stimuli (de Winter et al., 2014). A more nuanced examination of the response process in critical events under ACC is therefore needed.
A possible explanation for degraded detection of and response to critical driving situations is an unintended effect of automation known as behavioral adaptation (OECD, 1990). For example, ACC decreases the visual demand of driving; as a consequence, drivers use the freed resources to engage in non-driving activities, which may reduce the attention allocated to monitoring the road ahead (Rudin-Brown and Parker, 2004). The widespread availability of in-vehicle infotainment systems and nomadic devices may further aggravate this effect (Lee et al., 2006). In their naturalistic study, Malta et al. (2011) found a general increase in secondary-task engagement while driving with ACC. A follow-up study by Tivesten et al. (2015) examined drivers' visual attention in motorway car-following scenarios. In steady-state driving, the analysis confirmed a lower level of attention to the forward path with ACC than without (~77% mean eyes on path with ACC, compared to ~85% for manual driving). Tivesten et al. (2015) also clarified that most glances away from the forward path were driving-related. Because driving relies heavily on vision (Shinar, 2007), diverting visual attention from the forward road can increase the risk of a collision there. However, Malta et al. (2011) pointed out that drivers kept their attention on the primary driving task in critical situations. Furthermore, Tivesten et al. (2015) showed a threat-anticipation response: drivers anticipate impending criticality by directing their eyes to the forward roadway before a situation becomes critical. This is evidence that the allocation of attention away from the road is a function of the demand of the current driving situation (Ranney, 1994 and Summala, 2007).
A simulator study by Lee et al. (2006) evaluated the effectiveness of warning modalities at re-engaging drivers when ACC capabilities are exceeded. Their results showed that, if warned that an intervention is needed, drivers could effectively resume control even when distracted. However, other studies showed that drivers responded poorly to unexpected events or failures for which no alert is provided, such as sensor failures (Nilsson et al., 2013, Rudin-Brown and Parker, 2004, Stanton et al., 1997 and Strand et al., 2014). Fortunately, such failures are rare in the real world, thanks to technological advances and sensor redundancy; even so, providing feedback on system status and availability is recommended by ISO 15622:2010. The difficulties encountered by drivers may therefore be overrepresented in studies where such feedback is not provided (Lee et al., 2006).
Although an FCW is intended to redirect the driver's gaze toward the forward path and to signal that an avoidance maneuver is needed, the results of Tivesten et al. (2015) suggest that other cues can elicit a shift of visual attention in anticipation of a critical situation, even before an FCW is issued. However, the cause of this anticipatory mechanism was not clearly identified, hence the need for further investigation. Tivesten et al. (2015) showed that the average percentage of eyes on path increased steadily over time, and suggested that this increase was due to drivers' reactions to external stimuli (e.g., related to the approach toward the lead vehicle).

Method development and sample analyses described here were supported by the National Science Foundation (especially ARC-1023191 and ARC-0713956 to Bierman and EAR-0948350 to Rood) and the University of Vermont. Corbett was supported by a National Science Foundation Graduate Research Fellowship and a Doctoral Dissertation Research Improvement Grant (BCS-1433878). We thank L. Reusser for assistance in method development, the staff of CAMS-LLNL for assistance in making 10Be measurements and two anonymous reviewers for improving the manuscript.
1. Introduction
Sedimentary records of lacustrine fills from various settings form valuable archives of past environmental change. It is imperative to place the data from such records within the correct timeframe, to enable precise dating of lacustrine stratigraphies and to allow correlation with other records (Bronk Ramsey et al., 2014). Doing so often requires a high abundance of dates and age-depth models that provide age estimates between dated levels (e.g. Lohne et al., 2013). In recent years, advanced tools have become available that produce age-depth relations for lake core sequences while incorporating Bayesian statistical techniques (e.g. Blaauw and Christen, 2005, Bronk Ramsey, 2008 and Haslett and Parnell, 2008). The reliability of age-depth modelling with such tools depends on the number of dates and, hence, on the amount of datable material in the record. However, in lacustrine environments that regularly trap allochthonous sediment, such as lakes in river valleys and floodplains, meaningful datable organic material is often hard to collect over considerable vertical intervals. For such intervals, age-depth modelling tools typically fall back on assuming linear accumulation (see also Blaauw, 2010: his Table 1). If the lake fills are varved, counting (semi-)annual deposition layers provides additional options (e.g. Schlolaut et al., 2012 and Shanahan et al., 2012). Where very distinct sedimentary breaks occur in the fill, it is preferable to (i) sample for dating as close as possible to the break and (ii) prescribe this break in the age-depth model, to bound the to-be-modelled interval or to use it as a knick point (Bronk Ramsey, 2008). The more gradual the sedimentary variations are, the more difficult and arbitrary established age-modelling solutions become.
In irregularly laminated silty mud intervals of oxbow lake fills, deploying linear models between sparse radiocarbon dates at just a few vertical positions produces smooth age-depth relations that are unrealistic from setting and sedimentological perspectives. In large part, this is because the sedimentary information in the core is not used to its full potential. Where the sedimentology of the core itself shows variability that directly relates to accelerations and decelerations in the rate of vertical accumulation of the oxbow fill, this information should feed into the age model. Where such information can be routinely gathered at higher resolution than the sampling for dating, descriptive sedimentary data (i.e. along-core measurements describing variations in percentage organics, and/or grain size, and/or lamination thickness) can serve as proxy data for variations in sedimentation rate. Examples of such approaches are found in many sedimentary environments, ranging from alpine lakes (allochthonous organics: e.g. Fuentes et al., 2013), to varved lakes (lamination thickness: e.g. Brauer et al., 1999) and oxbow lakes (organic content and grain size: Toonen et al., 2012 and Toonen et al., 2015), to deep marine environments (turbidite activity: Toucanne et al., 2008), to long-term sedimentary accumulation (cyclo-stratigraphic facies alternations: De Boer and Smith, 2009). In aeolian deposits, Vandenberghe et al. (1997) used grain-size information as input to age-depth modelling within glacial accumulation stages, between interglacial soil horizons in a Central China loess sequence, with direct grain-size measurements used as the continuous sedimentary proxy recording variation in accumulation rates. In the organo-clastic fluvial-lacustrine case, the proportion of clastics versus organics is just such a sedimentary proxy for variation in accumulation rates, as this paper explores.
We present a spreadsheet method for constructing non-linear, modified age-depth models that incorporates variations in the sedimentation rates of siliciclastic and organic material in fluvio-lacustrine environments. In this setting, increased deposition of siliciclastic material in the lake basins is event-based: gross amounts of siliciclastics are delivered only when the river floods. How often this typically happens changes through time, predominantly because the active river migrates and changes its proximity to the site (described in further sections of the paper). Therefore, relatively clastic subintervals have higher sedimentation rates than subintervals dominated by autochthonous lake production, which forms the organic background accumulation and is assumed to accumulate at a more or less constant rate over time (compared to clastic influxes). The age-depth model is 'corrected' using a continuously sampled loss-on-ignition record as the sedimentary proxy data signaling accelerating and decelerating vertical aggradation.
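The correction principle can be sketched in a few lines of code (a simplified illustration, not the actual spreadsheet method: the function name and all numbers are invented; it assumes, as stated above, that the organic background accumulates at a roughly constant rate, so the time represented by each depth increment scales with its organic fraction):

```python
# Sketch: distribute the time between two dated levels non-linearly,
# weighting each depth increment by its organic fraction (from LOI).
# Clastic-rich (low-LOI) increments then receive less time, i.e. a
# higher sedimentation rate, than organic-rich increments.

def loi_corrected_ages(depths, loi, top_age, bottom_age):
    """Ages at each depth between two dated tie points (ages in cal yr BP)."""
    increments = []
    for i in range(1, len(depths)):
        thickness = depths[i] - depths[i - 1]
        mean_loi = 0.5 * (loi[i] + loi[i - 1])   # organic fraction of increment
        increments.append(thickness * mean_loi)  # time spent ~ thickness * LOI
    total = sum(increments)
    ages = [top_age]
    for inc in increments:
        ages.append(ages[-1] + (bottom_age - top_age) * inc / total)
    return ages

# Invented example with a clastic-rich middle subinterval (LOI = 0.10):
depths = [0.0, 0.5, 1.0, 1.5, 2.0]        # m below lake bottom
loi    = [0.60, 0.55, 0.10, 0.50, 0.60]   # organic fraction from LOI
ages = loi_corrected_ages(depths, loi, top_age=1000, bottom_age=3000)
```

A linear model would assign equal time to every 0.5 m increment here; the LOI-weighted model compresses the clastic-rich subinterval into a shorter time span while still honouring both dated tie points.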

There are many noticeable discrepancies among estimates of BC flux into the atmosphere, rivers, and shelf sediments. For example, in this study, the BC fluxes estimated for atmospheric emissions in China, the BC sedimentary sink in the ECS, and delivery from the Changjiang were very similar. In addition, the environmental behavior of different types of BC is often inconsistent as a result of their varying physical properties. For instance, soot, having a low density, is more easily transported by water than GBC, but these two types of BC cannot be distinguished using the CTO-375 methodology. Therefore, to further improve our understanding of the global BC cycle, it is important to: (1) develop better BC analytical methods capable of resolving the amounts of the various types of BC present in different natural matrices; (2) undertake further studies on the fate of BC, including its degradation and migration; and (3) further examine key components of the BC cycle, including large rivers and continental shelves.
Acknowledgments
This study was funded by the Ministry of Science and Technology (No. 2011CB409801).
Sediment oxygen consumption; East China Sea; Yellow Sea; Continental shelf; Mineralization
1. Introduction
Sediments are central to understanding the marine carbon cycle (Seiter et al., 2005), because mineralization and burial of organic carbon determine the efficiency of the biological pump (Jahnke et al., 1990) and consequently influence global climate change. In the global ocean, continental shelf sediments are amongst the most important and active sites for carbon mineralization and burial. Although constituting only 7% of the total ocean area (Menard and Smith, 1966), continental shelves account for approximately 40-50% of global organic carbon mineralization (Middelburg et al., 1997 and Glud, 2008), and approximately 40% of organic carbon is retained in these areas (Hedges and Keil, 1995).
Sediment oxygen consumption (SOC) is commonly used as a proxy for total benthic organic carbon mineralization, based on a relatively constant respiratory quotient (Froelich et al., 1979, Glud, 2008 and Seiter et al., 2005). During recent decades, SOC has been extensively studied in sediments ranging from shallow continental shelves to the deep ocean (Archer and Devol, 1992, Glud, 2008 and Glud et al., 1994). Furthermore, systematic reviews have been published reporting the global distribution of SOC and the export flux of organic carbon to the seafloor (Glud, 2008 and Seiter et al., 2005).
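As a minimal illustration of how SOC is used as a proxy (the respiratory quotient value below is a placeholder assumption, not one taken from the cited studies):

```python
# Hypothetical conversion: benthic organic carbon mineralization from
# sediment oxygen consumption (SOC) via an assumed respiratory quotient
# RQ (mol CO2 produced per mol O2 consumed).

def carbon_mineralization(soc_mmol_o2_m2_d, rq):
    """Organic carbon mineralization in mmol C m^-2 d^-1."""
    return soc_mmol_o2_m2_d * rq

rate = carbon_mineralization(10.0, rq=0.85)  # for 10 mmol O2 m^-2 d^-1
```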
The East China Sea (ECS) and the Yellow Sea (YS) are located in the western Pacific Ocean, between the Chinese mainland, the Ryukyu Islands and the Korean Peninsula (Fig. 1). The 200 m isobath lies more than 600 km off the Changjiang estuary towards the Okinawa Trough, resulting in a typical wide continental shelf with an average depth of 72 m. As a consequence of the influence of the large Changjiang and Huanghe Rivers (Milliman and Meade, 1983), a strong western boundary current (the Kuroshio Current; Liu et al., 2009), and substantial atmospheric deposition (Liu et al., 2003 and Zhang et al., 2007a), the area is subject to enormous discharges of sediments and nutrient inputs. Hence, a high level of primary productivity is typical of the area (Gong et al., 2003, Ning et al., 1995 and Zhang et al., 2007b). High primary productivity on a shallow, wide continental shelf implies intensive interactions between benthic and pelagic processes, including the vertical export of organic carbon and its mineralization and burial. Ultimately, the efficiency of the biological pump is determined by the balance among these processes.

In contrast, the amplitude of NGR, GRA, and MS variability changed after 0.8 Ma, indicating that sea-ice expanded over the Aleutian Basin during glacials as North Pacific water inflow decreased. In the MS record, there are notable spikes during interglacial periods at Site U1343 (Fig. 5 and Fig. 12), most likely reflecting coarse-grained particles supplied by glaciomarine input due to in situ sea-ice melting over the Bering Sea Slope. The marked high coherency between MS and δ18O after 0.8 Ma suggests that sea-ice evolution over the Bering Sea Slope may have been governed primarily by G-IG cycles (Fig. 10C). During such intervals, their lags suggest the sea-ice edge reached the Bering Sea Slope during the transition from interglacials to glacials. A gradual change in their lags after 0.6 Ma implies further extension of the sea-ice over the Bering Sea Slope at the G-IG scale, as the arrival timing of the sea-ice edge became closer to the interglacial maximum. This evidence of in situ sea-ice melting can be further interpreted as the presence of semi-perennial sea-ice cover over the Bering Sea Slope, which likely blocked North Pacific water inflow through the Aleutian Islands during glacial periods. We propose that extensive sea-ice expansion in the Bering Sea after ~1.2-0.9 Ma prevented terrigenous and biological sedimentation, as indicated by an abrupt drop in the LSR during glacial periods, a decrease in the amplitude of the NGR and GRA, and decreased biogenic opal content (Kanematsu et al., 2013 and Kim et al., in press).
Semi-perennial sea-ice cover over the Bering Sea Slope during glacial periods is consistent with previously published sea-ice reconstructions for the LGM at the Bering Sea Slope (Katsuki and Takahashi, 2005, Kim et al., 2011 and Rella et al., 2012) and the Umnak Plateau (Caissie et al., 2010), and suggests that the extensive sea-ice cover over the Bering Sea documented previously only for the last glacial cycle first appeared during glacials at ~0.9 Ma.
The marked sea-ice expansion over the Bering Sea between 1.2 and 0.9 Ma can be linked to two climatic forcings: (a) atmospheric forcing and (b) global continental ice-sheet evolution. Modern sea-ice distribution in the Bering Sea is mainly regulated by the strength of the northerly wind over the Bering Sea during winter (Pease, 1980). This winter northerly wind, linked to the location and intensity of the Siberian High and the Aleutian Low (Overland et al., 1999 and Rodionov et al., 2007), enhances southward sea-ice advection on the Bering Shelf (Zhang et al., 2010). This ocean-atmosphere linkage is also important on geological timescales, as indicated by data from the Holocene (Clegg et al., 2011, Katsuki et al., 2009 and Muhs et al., 2003) and the last 60 kyr (Rella et al., 2012). The expansion of sea-ice evident in our records suggests that a similar sea-ice-atmosphere linkage may have been active at least since 0.9 Ma. Since such linkages also impact ocean heat capacity (Zhang et al., 2010), they could be studied further using SST reconstructions at Site U1343.

Fig. 7. Mean percentage decrease in observed N2O concentration associated with estimated decreases in NH3 concentration during JR271. E01 yellow, E02 turquoise, E03 red, E04 blue, E05 black.
The decreased production of N2O with increasing OA has the potential to offer a negative feedback to a warming environment by reducing the atmospheric radiative forcing contribution of N2O. This could derive from two mechanisms: a direct reduction in the ocean-to-atmosphere flux of N2O and, more extreme, a reversal of the direction of that flux, should the ocean change from a source to a sink of atmospheric N2O. Beman et al. (2011) estimated that their observed decreases in nitrification rates (3-44%) would lead to a global decrease in N2O production of between 0.06 and 0.83 Tg N yr−1 in the next 20 to 30 years. This is of particular note as it is comparable to all current N2O production from fossil fuel combustion and industrial processes (0.7 Tg N yr−1). Taking a similar approach, and assuming that 50% of the global ocean N2O source of 3.8 Tg N yr−1 (Denman et al., 2007) is produced through nitrification (Codispoti, 2010), the data from the current study indicate comparable, albeit slightly lower, reductions in oceanic N2O production. For the lower range of treatments (mean pHT decrease = 0.13) the estimated reduction in the ocean N2O source is between 0.04 and 0.44 Tg N yr−1, and for the highest treatments (mean pHT decrease = 0.31) the predicted decrease ranges between 0.23 and 0.82 Tg N yr−1.
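The scaling in this paragraph can be reproduced directly from the stated inputs (the function name is ours; the constants are those quoted above):

```python
# Ocean N2O source reduction = global source x nitrification share x
# fractional decrease in nitrification-driven N2O production.

def n2o_source_reduction(fraction_decrease,
                         ocean_source_tg=3.8,      # Denman et al. (2007)
                         nitrification_share=0.5): # Codispoti (2010)
    return ocean_source_tg * nitrification_share * fraction_decrease

# Beman et al. (2011) nitrification decreases of 3-44%:
low  = n2o_source_reduction(0.03)   # ~0.06 Tg N yr-1
high = n2o_source_reduction(0.44)   # ~0.84 Tg N yr-1
```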
Our experiments have shown that OA will decrease the production of N2O in the pelagic water column. It is clearly apparent, though, that future oceans will not undergo OA in isolation from other predicted changes. Warming of the oceans, decreasing oxygen levels (Gruber, 2011 and Riebesell and Gattuso, 2015) and an OA-induced reduction in the export of organic material to the deep ocean (a reduced ballast effect; Codispoti, 2010; Gehlen et al., 2011) are all expected to impact N2O production and release to the atmosphere. Each of these conditions offers a positive feedback to a warming environment with regard to its impact on N2O, and so will, to a greater or lesser extent, counter the reduction in N2O caused by OA. The individual stressors are identified, but their combined effects may prove to be additive, synergistic or antagonistic (Riebesell and Gattuso, 2015), and the ultimate impact of these multiple stressors offers an unknown, uncharacterised and currently unpredictable control over the release of N2O to the atmosphere.
Acknowledgments
This work was funded by NERC Grant UKOA-Ocean Acidification impacts on sea-surface, biology, biogeochemistry & climate (NE/H017259/1). We would like to thank Eric Acterberg, Richard Sanders and Mark Stinchcombe for nutrient measurements; Matthew Humphreys, Eithne Tynan and Mariana Ribas-Ribas for carbonate chemistry; and Mark Moore and Sophie Richier for the management of bioassay manipulations and incubations. We are grateful to Steven Biller for providing the alignment of the amoA AOA sequences and to Tom Bell for discussions concerning the relationship between NH3 and NH4+.

Large parts of the Southern Ocean are classified as 'high-nitrate low-chlorophyll' (HNLC), reflecting limitation of phytoplankton growth by a limited supply of iron in these macronutrient-replete waters (de Baar et al., 1995). The Scotia and Weddell Seas have a relatively high productivity compared to the rest of the HNLC waters in the Southern Ocean (Korb et al., 2005). Favourable topography and eddies supply iron to the region, facilitating bloom formation (Kahru et al., 2007 and Park et al., 2010). In particular, the South Georgia bloom in the Scotia Sea is the largest and most prolonged bloom in the Southern Ocean (Korb et al., 2008).
3. Methods
The two polar cruises were conducted on board the RRS James Clark Ross. The Arctic cruise, JR271 (1st June-2nd July 2012), covered the Nordic Seas, Barents Sea and Denmark Strait, and the Southern Ocean cruise, JR274 (9th January-12th February 2013), covered the Scotia and Weddell Seas.
We collected water column and underway surface water samples for all variables, but the focus of this paper is on the spatial variability of surface waters. Water column data were used to infer changes in surface water and are not fully described here (see Supplementary information). In the interest of clarity, unless specifically stated, descriptions of variables throughout the paper refer to surface water properties (<6 m). The spatial resolution of surface water sampling during the Arctic cruise was higher than during the Southern Ocean cruise, due to the successful deployment of an underway pH sensor during that cruise, as detailed below. This sensor was also deployed during the Southern Ocean cruise, but due to technical issues the data were not of sufficiently high quality and were therefore not used for the analysis in this study.
Surface ocean temperature and salinity from the underway seawater supply (intake at ca. 6 m depth) were logged continuously using a shipboard thermosalinograph (SBE 45, SeaBird Electronics, Inc.). Measurements were averaged to one-minute resolution and calibrated using the conductivity-temperature-depth (CTD) profiler data. During the Arctic cruise, a spectrophotometric pH instrument sampled every 6 min from the ship's underway seawater supply; samples for surface DIC and TA were collected every one to two hours from this supply. Discrete samples for DIC and TA in the water column were obtained from the CTD casts using 20 L Ocean Test Equipment bottles. All DIC and TA samples were collected into 250 mL Schott Duran borosilicate glass bottles using silicone tubing and poisoned with 50 μL of saturated mercuric chloride solution after creating a 2.5 mL air headspace. Samples were immediately sealed shut with ground glass stoppers and stored in the dark until analysis.
3.2. Carbonate system measurements
All Arctic samples were analysed on-board within 36 h of collection using a VINDTA 3C instrument. The DIC was measured by coulometric titration and TA by potentiometric titration and calculated using a modified Gran plot approach (Bradshaw et al., 1981). Due to malfunctioning of the coulometer on the VINDTA 3C during the Southern Ocean cruise, one-third of samples were analysed on-board with a DIC analyser which uses non-dispersive infrared detection (Apollo SciTech AS-C3 with a LI-COR 7000), with subsequent TA analysis on the VINDTA 3C system. The remainder of the samples were analysed at the National Oceanography Centre, Southampton (NOCS) using a VINDTA 3C instrument for both DIC and TA. Measurements were calibrated using certified reference material (batch 117 in the Arctic, batches 119 and 120 in the Southern Ocean) obtained from A.G. Dickson (Scripps Institution of Oceanography, USA). The 1σ measurement precision was calculated as the absolute difference between sample duplicates divided by 2/√π (Thompson and Howarth, 1973), and was ±3.8 and ±1.7 μmol kg−1 for DIC and TA, respectively, for the Arctic Ocean. For the Southern Ocean, overall DIC precision was ±1.3 μmol kg−1 for measurements with the Apollo and ±3 μmol kg−1 for measurements with the VINDTA; TA precision was ±2 μmol kg−1.
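The duplicate-based precision estimate can be written out as follows (a sketch; the duplicate values are invented for illustration):

```python
import math

# 1-sigma precision from sample duplicates: mean absolute difference
# between duplicates divided by 2/sqrt(pi) (Thompson and Howarth, 1973).

def duplicate_precision(pairs):
    mean_abs_diff = sum(abs(a - b) for a, b in pairs) / len(pairs)
    return mean_abs_diff / (2.0 / math.sqrt(math.pi))

# invented DIC duplicates in umol kg-1
pairs = [(2105.3, 2103.1), (2110.0, 2114.2), (2098.7, 2099.9)]
sigma = duplicate_precision(pairs)  # ~2.2 umol kg-1
```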

5.4. Downward-looking towed camera
Nighttime operations during the 2012 HUMMA sampling survey included transects to collect photographs of the seafloor using a downward-looking towed camera system (TowCam; Fig. 5) provided by the Woods Hole Oceanographic Institution (WHOI). TowCam used a Nikon™ D7000 camera (Fornari and WHOI TowCam Group, 2003) to record 16.2 megapixel downward-looking images every ten seconds from altitudes of ~5 m above the seafloor while moving at speeds of 0.25-0.5 kts. On average, TowCam photographs imaged a region ~4 m by 6 m at cm-scale resolution. TowCam was deployed for 17 photo-transects in 2012, collecting 30,010 digital images. TowCam images were used to locate munitions and perform a biota assessment of the study area (Kelley et al., 2016).
Fig. 5. WHOI TowCam includes batteries, downward-looking camera, synchronized lights and transducers for USBL tracking.
5.5. Time-lapse photography
Two time-lapse digital still cameras were deployed near munitions during the 2012 HUMMA program to observe faunal behavior over periods of one to three days. One of the time-lapse digital still cameras was provided by WHOI, and one was built by high school students from Hawaii (Davis et al., 2012) as part of the Science Technology Engineering Mathematics (STEM) education component of HUMMA. The WHOI time-lapse camera collected digital still images using a Nikon™ Coolpix 995 3.2 megapixel (MP) camera. The camera was synchronized to strobe lights, and a Deep-Sea Power and Light (DSPL) 24-volt direct-current sea battery provided power for the camera and lights. The WHOI camera was deployed on November 24, 2012 and recovered on November 27, 2012. During the deployment it captured 1460 images.
The student-built time-lapse camera system (a.k.a. KidCam) had a budget of
Corrosion; Sea-disposed munitions; Microbial-induced corrosion; Bacterial-induced mineralization; Hawaii Undersea Military Munitions Assessment
1. Introduction
Sea disposal of military munitions, including excess, obsolete, damaged, or captured conventional and chemical munitions, was an accepted international practice until the mid-1970s. The United States Department of Defense (US DoD) ceased sea-disposal operations in 1970, prior to the passage by the US Congress of the Marine Protection, Research, and Sanctuaries Act of 1972, which prohibited the practice. Between 1919 and 1970, the US DoD disposed of approximately 32,000 tons (29,000 metric tons) of chemical warfare material, and a significant but largely undetermined quantity of conventional munitions and munition components, at 30 designated disposal sites within US coastal waters (US DoD, 2010). In addition to aerial bombs and projectiles, confined and/or containerized gaseous, liquid, and solid propellants, bulk explosives or chemical warfare agents, pyrotechnics, chemical or riot control agents, smokes, and incendiary chemicals were also sea-disposed. These discarded military munitions (DMM) were typically removed from storage and were not armed, or otherwise prepared for action, prior to sea disposal. Therefore, these disposed munitions are not considered unexploded ordnance by definition (10 U.S.C. 101(e)(5)(A) through (C); Carton and Jagusiewicz, 2009). However, the risk of explosive detonation cannot be eliminated from consideration, given that available records of the specific munitions and quantities disposed are generally incomplete at these sites.

Furthermore, of the three bio-optical provinces, our results demonstrate that only within the inflow shelves can Chl a and NPP be confidently derived from satellite data using the OC3L algorithm. Because the OC3L algorithm is empirically tuned to in situ data representative of inflow shelf conditions, it is the preferred option for satellite retrievals across the entire Chl a range in this bio-optical province. When averaged over space and time, the global algorithm (OC3Mv6) produces values of annual Chl a and NPP similar to those produced by OC3L, but it is unable to provide reliable estimates at individual pixels.
For the other two bio-optical provinces, interior and outflow shelves+basin, new algorithms may need to be developed. Initially, the OC4L algorithm was tuned with data primarily collected from the Beaufort and western Chukchi seas during late summer, when light absorption was generally dominated by CDOM (Wang et al., 2005). Later, the OC4L coefficients were modified by incorporating more high-latitude data from unreported locations (>55°N, n=686) (Cota et al., 2004). While the initial version of OC4L may have been suitable for high-CDOM environments like the interior shelves, it is unclear whether the modified form of OC4L is still the best algorithm for this province, given that the sampling locations and dates of the in situ calibration data remain unknown. Finally, to our knowledge, no empirical Chl a algorithm has been developed for the outflow shelves+basin bio-optical province. Thus, further algorithm development will be required to capture the spatial heterogeneity of the Arctic Ocean and to accurately retrieve Chl a, and thus NPP, in these provinces.
Arctic Ocean bio-optical properties deviate from the global mean because of phytoplankton pigment packaging and absorption by CDOM. At individual pixels, the net effect of these two factors results in either an overestimation or underestimation by the global ocean color algorithm. Based on the mean bio-optical conditions of Arctic Ocean sub-regions, we have divided the Arctic Ocean into three bio-optical provinces that correspond to three previously identified ecological regions: inflow shelves, interior shelves, and outflow shelves+basin (Carmack et al., 2006). These distinct bio-optical provinces suggest that three different empirical ocean color algorithms would be sufficient to characterize the spatially diverse bio-optical conditions throughout the Arctic Ocean. While our OC3L algorithm can be confidently used for inflow shelves, the interior and outflow shelves+basin require development of ocean-color algorithms specific to the conditions of each bio-optical province.
Given the optical complexity within the Arctic Ocean, semi-analytical algorithms, which combine empirical statistics and radiative-transfer theory to account for regional differences in absorption and backscattering, may be the preferred solution for ocean color remote sensing in the Arctic Ocean. However, significantly more in situ bio-optical measurements, especially in areas like the Eurasian shelves, are required for improved parameterization before semi-analytical algorithms can be considered a viable solution. Until that time, our study demonstrates that empirical algorithms, such as OC3L, can be effectively tuned to bio-optical provinces to provide accurate estimates of Chl a and NPP throughout the Arctic Ocean. Future sensors, such as the Ocean Ecosystem Spectroradiometer (OES) to be deployed on the upcoming ACE and PACE missions, with very high spectral resolution including channels in the UV, will have a much better chance of direct inversion of hyperspectral semi-analytical algorithms (NASA PACE Science Definition Team, 2010).
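For context, algorithms of the OCx family referred to above (OC3L, OC3Mv6, OC4L) share a simple empirical form: a polynomial in the log of a maximum band ratio of remote-sensing reflectances. The sketch below uses that generic form with invented placeholder coefficients, not the published OC3L or OC3M values:

```python
import math

# Generic OCx-style band-ratio Chl a algorithm. The coefficients below are
# placeholders for illustration only; tuning an algorithm such as OC3L to a
# bio-optical province means refitting them to coincident in situ Chl a.

def ocx_chl(rrs_blue1, rrs_blue2, rrs_green,
            coeffs=(0.25, -2.5, 1.5, -0.5, -1.0)):  # invented values
    """Chl a (mg m^-3) from a 4th-order polynomial of the max band ratio."""
    ratio = math.log10(max(rrs_blue1, rrs_blue2) / rrs_green)
    log_chl = sum(c * ratio ** k for k, c in enumerate(coeffs))
    return 10.0 ** log_chl

# With these coefficients, higher blue/green ratios (clearer water) map to
# lower Chl a, mimicking the behaviour of the operational algorithms:
chl_clear  = ocx_chl(0.008, 0.007, 0.005)
chl_turbid = ocx_chl(0.006, 0.005, 0.005)
```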

Generate initial population: Let P(g) = {p_1(g), p_2(g), …, p_NP(g)} denote the population after g generations (g = 0 is the initial generation), where NP is the population size and p_i(g) = (a_i(g), c_i(g), b_i(g)) is a decision vector of the population, i = 1, 2, …, NP. The initial population (g = 0) is randomly generated from a uniform distribution as follows:
where sort() represents a function that sorts the vector elements in ascending order, to ensure the property that f of Eq. (6) is a monotonically increasing function. rand_{i,j}[0,1], j = 1, 2, 3, represents a uniformly distributed random variable within the range [0,1] for each i and j.
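The initialization step above can be sketched in NumPy as follows; this is a minimal illustration, where the population size NP and the [0,1] search range for (a, c, b) are assumed values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

NP = 20                      # population size (illustrative value)
lo, hi = 0.0, 1.0            # assumed search range for (a, c, b)

# Each individual is a triple (a_i, c_i, b_i); sorting the three uniform
# draws in ascending order guarantees a <= c <= b, so the membership
# function f of Eq. (6) stays monotonically increasing.
P = np.sort(rng.uniform(lo, hi, size=(NP, 3)), axis=1)
```

Sorting each row is the simplest way to enforce the monotonicity property at initialization; the columns of `P` then hold a, c, and b in order.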
Mutation process: In the mutation process, NP mutant vectors v_i(g), i = 1, 2, …, NP, are generated from three individuals chosen randomly, with indices r_1, r_2 and r_3, using a scale factor F, as follows:
Note that indexes have to be randomly generated once for each mutant vector. The scale factor F is a predefined positive control parameter.
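The mutation step can be sketched with the classic DE/rand/1 rule, v_i = p_{r1} + F·(p_{r2} − p_{r3}); the scale factor F = 0.8 and the population below are illustrative assumptions, since the paper's exact equation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
NP, F = 20, 0.8              # population size and scale factor (illustrative)
P = np.sort(rng.uniform(0.0, 1.0, size=(NP, 3)), axis=1)

V = np.empty_like(P)
for i in range(NP):
    # draw three mutually distinct indices, all different from i,
    # freshly for each mutant vector
    r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                            size=3, replace=False)
    V[i] = P[r1] + F * (P[r2] - P[r3])   # DE/rand/1 mutation
```

Drawing the indices without replacement and excluding i matches the requirement that the three donors be distinct individuals of the current population.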
Recombination or crossover process: In this process, a crossover operation is applied to each pair of the target vector p_i(g) and its corresponding mutant vector v_i(g) to generate a trial vector u_i(g) as follows:
Selection process: The target vector p_i(g) or trial vector u_i(g) that generates the better solution is selected as the target vector of the next generation, p_i(g+1). The selection formula is shown in Eq. (14):
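One full generation (mutation, binomial crossover, greedy selection) can be sketched as below. The objective function, the crossover rate CR, and the re-sorting of the trial vector to restore a ≤ c ≤ b are all illustrative assumptions; the paper's own fitness and constraint handling are given by its Eqs. (6)–(14):

```python
import numpy as np

rng = np.random.default_rng(2)
NP, D, F, CR = 20, 3, 0.8, 0.9      # illustrative control parameters
P = np.sort(rng.uniform(0.0, 1.0, size=(NP, D)), axis=1)

def cost(p):
    # placeholder objective; the real algorithm evaluates the fitness
    # of the membership function defined by (a, c, b)
    return float(np.sum((p - 0.5) ** 2))

for i in range(NP):
    idx = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
    v = P[idx[0]] + F * (P[idx[1]] - P[idx[2]])      # mutation
    jrand = rng.integers(D)                          # forced crossover position
    mask = rng.random(D) < CR
    mask[jrand] = True
    u = np.where(mask, v, P[i])                      # binomial crossover
    u = np.sort(u)       # re-impose a <= c <= b (one simple repair; assumed)
    if cost(u) <= cost(P[i]):                        # greedy selection
        P[i] = u
```

The greedy comparison guarantees that the cost of each individual never increases from one generation to the next, which is the standard DE selection behavior.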
The pseudo code of the complete algorithm is summarized in Algorithm 1. Fig. 3 shows an example of the membership function μ_C, generated automatically by the proposed Algorithm 1, of the Speech matcher (PAC, GMM) in the XM2VTS-Benchmark database [27].
Pseudo code of the complete algorithm.
Example of the membership function μC of the Speech matcher (PAC, GMM) in the …
2.4. Fuzzy aggregation operator
Aggregation operations on fuzzy sets [28] are operations by which several fuzzy sets are combined in a desirable way to produce a single fuzzy set. Several studies have proposed deploying fuzzy aggregation operators in the biometric score fusion problem [29] and [30]. These operations are mainly based on general concepts such as fuzzy rules and triangular norms (t-norms and t-conorms). However, the main challenge of the methods based on fuzzy rules is the number of rules used, because it grows exponentially with the number of biometric matchers, and triangular norms show the same behavior whatever the values of the information to combine (i.e., t-norms are conjunctive in nature and t-conorms are disjunctive in nature). In this work, we use symmetric sums to combine membership functions because they can be conjunctive (if the sources of information provide weakly conflicting evidence) or disjunctive (if the sources of information provide highly conflicting evidence) depending on the values of the variables involved.
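The adaptive behavior of symmetric sums can be illustrated with the product-generated symmetric sum, σ(a, b) = ab / (ab + (1−a)(1−b)); this particular generator is one common choice used here for illustration, not necessarily the one adopted in the paper:

```python
def symmetric_sum(a, b, eps=1e-12):
    """Product-generated symmetric sum: sigma(a, b) = ab / (ab + (1-a)(1-b)).

    Reinforces agreeing memberships (both high -> even higher, both low ->
    even lower) and is compensatory, tending to 0.5, when the two sources
    conflict.
    """
    num = a * b
    den = num + (1.0 - a) * (1.0 - b)
    return num / max(den, eps)

# agreeing high memberships are reinforced upward
print(round(symmetric_sum(0.7, 0.7), 3))   # 0.845
# strongly conflicting memberships compensate to 0.5
print(round(symmetric_sum(0.7, 0.3), 3))   # 0.5
```

Unlike a fixed t-norm or t-conorm, the same operator thus acts conjunctively or disjunctively depending on the values being combined, which is exactly the property motivating its use for score fusion.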

3. Developed method
In this section, we describe the process of deriving the objective function for a generic fuzzy–possibilistic clustering problem and then propose a method for finding a solution to it. We first specify the inputs to the problem in Section 3.1 and then outline the assessment of loss in Section 3.2. This is followed by a brief introduction of the robust loss function utilized in this work in Section 3.3. We then develop a variant of the Alternating Optimization (AO) process to find a solution to the derived optimization problem in Section 3.4 and discuss the pruning method utilized in this work in Section 3.5. Subsequently, the outlier removal procedure utilized in this work is described in Section 3.6. Finally, the algorithm outline is summarized in Section 3.7.
3.1. Input specification
We assume that a mathematical model for the datums is given and denote datums as x or x_n, as applicable. We also assume that a cluster model is provided and denote clusters as ψ or ψ_c, as required. In this work, we utilize a weighted set of datums, defined as,
We denote the weight of X as,
We assume that a real-valued distance function, d(x, ψ), is defined on the datum x and the cluster representation ψ. We emphasize that this generalized assumption is a departure from prototype-based approaches, which assume that x and ψ have identical mathematical models, generally as members of R^k, and where the Euclidean distance function is adopted. We assume that the distance between a datum and a cluster is always non-negative. We also assume that the distance function is unbounded, i.e. for any cluster representation ψ and any positive value L, there are infinitely many datums x for which d(x, ψ) > L. When the datum belongs to R^k, the Euclidean distance, any L_r norm, and the Mahalanobis distance are special cases of the notion of datum-to-cluster distance defined here. The corresponding cluster models in these cases would be ψ ∈ R^k, ψ ∈ R^k, and ψ identifying a pair of a member of R^k and a k×k covariance matrix, respectively.
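The two special cases just mentioned can be sketched as follows; the function names are illustrative, and the point is that the cluster model ψ differs between them (a point in R^k versus a mean–covariance pair):

```python
import numpy as np

def euclidean_dist(x, psi):
    # prototype-based case: psi is a point in R^k
    return float(np.linalg.norm(x - psi))

def mahalanobis_dist(x, psi):
    # generalized case: psi identifies a pair (mean in R^k, k x k covariance)
    mu, cov = psi
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

x = np.array([1.0, 2.0])
print(euclidean_dist(x, np.array([1.0, 0.0])))                 # 2.0
print(mahalanobis_dist(x, (np.array([1.0, 0.0]), np.eye(2))))  # 2.0
```

With an identity covariance the two distances coincide, which makes the prototype-based setting a genuine special case of the generalized one.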
We assume that d(x, ψ) is differentiable in terms of ψ and that for any non-empty weighted set X, the following function of ψ,
has one and only one minimizer which is also the solution to the following equation,
respectively. We note that when a closed-form representation for Ψ(·) is not available, conversion to a W-estimator can produce a procedural solution to (11) (refer to [87] and [88] for details).
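A W-estimator solves the stationarity equation by fixed-point iteration: the current center induces per-datum weights, and the reweighted mean becomes the new center. The sketch below uses a Huber-style weight as an illustrative robust loss, which is an assumption of this example rather than the loss adopted in the paper:

```python
import numpy as np

def w_estimator(X, weights, c=1.345, iters=50):
    """Fixed-point (W-estimator) iteration for a robust cluster center.

    Illustrative Huber-style reweighting: datums farther than c from the
    current center are down-weighted in proportion to their distance, so
    the iteration solves the stationarity equation procedurally instead
    of requiring a closed-form minimizer.
    """
    psi = np.average(X, axis=0, weights=weights)   # start at the weighted mean
    for _ in range(iters):
        r = np.linalg.norm(X - psi, axis=1)
        u = np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))  # Huber weight
        wt = weights * u
        psi = (wt[:, None] * X).sum(axis=0) / wt.sum()
    return psi
```

On data with a gross outlier, the iteration settles near the bulk of the datums rather than being dragged toward the outlier, which is the behavior the robust loss is meant to provide.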
We assume that a function Ψ°(·), which may depend on X, is given that produces an appropriate number of initial clusters. We refer to this function as the cluster initialization function. We denote the initial number of clusters as C̄. Note that C̄ is not required to be explicitly known; it is in fact the responsibility of Ψ°(·) to produce an initial number of clusters relevant to the problem class or problem instance.