Creation year: 2025 (378 records, showing 1-10)
  • '''DEFINITION''' Ocean heat content (OHC) is defined here as the deviation from a reference period (1993-2014) and is closely proportional to the average temperature change from z1 = 0 m to z2 = 700 m depth: OHC = ∫_{z1}^{z2} ρ0 cp (T_yr − T_clim) dz (Eq. 1), with a reference density ρ0 = 1030 kg m-3 and a specific heat capacity cp = 3980 J kg-1 °C-1 (e.g. von Schuckmann et al., 2009). Time series of annual mean values of area-averaged ocean heat content are provided for the Mediterranean Sea (30°N-46°N; 6°W-36°E) and are evaluated for topography deeper than 300 m. '''CONTEXT''' Knowing how much and where heat energy is stored and released in the ocean is essential for understanding the contemporary Earth system state, variability and change, as the oceans shape our perspectives for the future. The quality evaluation of MEDSEA_OMI_OHC_area_averaged_anomalies is based on the “multi-product” approach introduced in the second issue of the Ocean State Report (von Schuckmann et al., 2018), following the MyOcean experience (Masina et al., 2017). Six global products and a regional (Mediterranean Sea) product have been used to build an ensemble mean and its associated ensemble spread. The reference products are: • The Mediterranean Sea Reanalysis at 1/24 degree horizontal resolution (MEDSEA_MULTIYEAR_PHY_006_004, DOI: https://doi.org/10.25423/CMCC/MEDSEA_MULTIYEAR_PHY_006_004_E3R1, Escudier et al., 2020) • Four global reanalyses at 1/4 degree horizontal resolution (GLOBAL_MULTIYEAR_PHY_ENS_001_031): GLORYS, C-GLORS, ORAS5, FOAM • Two observation-based products: CORA (INSITU_GLO_PHY_TS_OA_MY_013_052) and ARMOR3D (MULTIOBS_GLO_PHY_TSUV_3D_MYNRT_015_012). Details on the products are given in the PUM and QUID of this OMI. '''CMEMS KEY FINDINGS''' The ensemble mean ocean heat content anomaly time series over the Mediterranean Sea shows a continuous increase over the period 1993-2022 at a rate of 1.38±0.08 W/m2 in the upper 700 m. After 2005 the rate has clearly increased with respect to the previous decade, in agreement with Iona et al. (2018). '''DOI (product):''' https://doi.org/10.48670/moi-00261
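As a worked illustration of Eq. 1, the minimal Python sketch below vertically integrates a temperature anomaly profile with the stated constants (ρ0 = 1030 kg m-3, cp = 3980 J kg-1 °C-1). The depth levels and the +0.1 °C anomaly are made-up placeholders, not values taken from the product.

```python
import numpy as np

RHO_0 = 1030.0  # reference density [kg m-3], as in the OMI definition
C_P = 3980.0    # specific heat capacity [J kg-1 degC-1]

def ohc_anomaly(depth_m, temp_anomaly_degc, z1=0.0, z2=700.0):
    """Trapezoidal integration of rho_0 * c_p * (T_yr - T_clim) from z1 to z2.

    `depth_m` and `temp_anomaly_degc` are 1-D arrays of the same length;
    the result is an ocean heat content anomaly per unit area [J m-2].
    """
    mask = (depth_m >= z1) & (depth_m <= z2)
    z = depth_m[mask]
    f = RHO_0 * C_P * temp_anomaly_degc[mask]
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

# Hypothetical profile: a uniform +0.1 degC anomaly over the upper 700 m.
depth = np.linspace(0.0, 700.0, 71)
anomaly = np.full_like(depth, 0.1)
print(f"OHC anomaly: {ohc_anomaly(depth, anomaly):.3e} J m-2")
```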

  • '''DEFINITION''' The temporal evolution of thermosteric sea level in an ocean layer is obtained from an integration of temperature-driven ocean density variations, which are subtracted from a reference climatology to obtain the fluctuations from an average field. The products used include three global reanalyses: GLORYS, C-GLORS, ORAS5 (GLOBAL_MULTIYEAR_PHY_ENS_001_031) and two in situ based reprocessed products: CORA5.2 (INSITU_GLO_PHY_TS_OA_MY_013_052) and ARMOR-3D (MULTIOBS_GLO_PHY_TSUV_3D_MYNRT_015_012). Additionally, the time series based on the method of von Schuckmann and Le Traon (2011) has been added. The regional thermosteric sea level values are then averaged over 60°S-60°N to monitor interannual to long-term global sea level variations caused by temperature-driven ocean volume changes through thermal expansion, expressed in meters (m). '''CONTEXT''' The global mean sea level reflects changes in the Earth’s climate system in response to natural and anthropogenic forcing factors such as ocean warming, land ice mass loss and changes in water storage in continental river basins. Thermosteric sea level variations result from temperature-related density changes in sea water associated with volume expansion and contraction (Storto et al., 2018). Global thermosteric sea level rise caused by ocean warming is known as one of the major drivers of contemporary global mean sea level rise (Cazenave et al., 2018; Oppenheimer et al., 2019). '''CMEMS KEY FINDINGS''' Since 2005 the upper (0-2000 m) near-global (60°S-60°N) thermosteric sea level has risen at a rate of 1.3±0.3 mm/year. Note: The key findings will be updated annually in November, in line with OMI evolutions. '''DOI (product):''' https://doi.org/10.48670/moi-00240
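To make the principle concrete, here is a minimal sketch that approximates a thermosteric sea level anomaly by integrating a thermal-expansion term over the 0-2000 m layer. It assumes a constant expansion coefficient (ALPHA) and a made-up temperature anomaly profile; the actual OMI is derived from the full density fields of the reanalyses and observation-based products listed above, not from this simplification.

```python
import numpy as np

ALPHA = 2.0e-4  # assumed constant thermal expansion coefficient [1/degC], illustrative only

def thermosteric_anomaly_m(depth_m, temp_anomaly_degc, z_max=2000.0):
    """Approximate thermosteric sea level anomaly [m] as the vertical integral
    of alpha * (T - T_clim) over 0..z_max (trapezoidal rule)."""
    mask = depth_m <= z_max
    z = depth_m[mask]
    f = ALPHA * temp_anomaly_degc[mask]
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

# Hypothetical profile: a uniform +0.05 degC anomaly over the upper 2000 m.
depth = np.linspace(0.0, 2000.0, 201)
anomaly = np.full_like(depth, 0.05)
print(f"Thermosteric anomaly: {thermosteric_anomaly_m(depth, anomaly) * 1000:.1f} mm")
```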

  • Based on the consolidation of the Ifremer networks RESCO (https://doi.org/10.17882/53007) and VELYGER (https://doi.org/10.17882/41888), the general objective of the ECOSCOPA project is to analyze the causes of spatio-temporal variability of the main life traits (larval stage, recruitment, reproduction, growth, survival, cytogenetic anomalies) of the Pacific oyster in France and to follow their evolution over the long term in the context of climate change. High-frequency environmental data have been monitored since 2010 at several stations next to oyster farming areas in eight bays of the French coast (from south to north): Thau Lagoon and the bays of Arcachon, Marennes Oléron, Bourgneuf, Vilaine, Brest, Mont Saint-Michel and Veys (see map below). Sea temperature and practical salinity are recorded at a 15-minute frequency. For several sites, fluorescence and turbidity data are also available. Data are acquired with automatic probes placed directly in oyster bags or fixed on a metallic structure 50 cm above the sediment, except for Thau Lagoon, where probes are deployed 2 m below the sea surface. Since 2010, several types of probes have been used: STP2, STPS, SMATCH or WiSens CTD from NKE (www.nke-instrumentation.fr) and, more recently, ECO FLNTU (www.seabird.com). The probes are regularly qualified by calibrations in the Ifremer coastal laboratories. The estimated precision of the complete data collection process is: temperature (±0.1°C), salinity (±0.5 psu), in vivo fluorescence (±10%), turbidity (±10%). The data are qualified into several levels: 0-No quality check performed, 1-Good data, 2-Probably good data, 3-Probably bad data, 4-Bad data, 5-Value changed, 7-Nominal value, 8-Interpolated value, 9-Missing value.
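For users working with these time series, the sketch below shows one way to apply the quality levels listed above before computing daily means from the 15-minute records. The file name and the column names (TIME, TEMP, TEMP_QC) are assumptions for illustration, and the set of flags treated as usable is a user choice, not a recommendation from the data provider.

```python
import pandas as pd

# Flags treated as usable in this example: 1 (good), 2 (probably good),
# 5 (value changed), 7 (nominal value), 8 (interpolated).
USABLE_FLAGS = {1, 2, 5, 7, 8}

# Hypothetical file and column names; the distributed files may differ.
df = pd.read_csv("ecoscopa_station.csv", parse_dates=["TIME"])
good = df[df["TEMP_QC"].isin(USABLE_FLAGS)]

# Daily mean sea temperature from the 15-minute records.
daily_temp = good.set_index("TIME")["TEMP"].resample("1D").mean()
print(daily_temp.head())
```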

  • This visualization product displays the total abundance of marine macro-litter (> 2.5 cm) per beach per year from Marine Strategy Framework Directive (MSFD) monitoring surveys. EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, beach litter data have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). Harmonizing all the data has been the most challenging task, considering the heterogeneity of the data sources, sampling protocols and reference lists used at the European scale. The following preliminary processing steps were necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, following the approach of OSPAR, which no longer includes these data in its monitoring;
- Selection of MSFD surveys only (exclusion of other monitoring, cleaning and research operations);
- Exclusion of beaches without coordinates;
- Removal of some categories and litter types such as organic litter, small fragments (paraffin and wax; items > 2.5 cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
- Normalization of survey lengths to 100 m and 1 survey / year: in some cases the survey length was not exactly 100 m, so in order to compare the abundance of litter from different beaches a normalization is applied using this formula: Number of items (normalized to 100 m) = Number of items × (100 / survey length). This normalized number of items is then summed to obtain the total normalized number of litter items for each survey. Finally, the median abundance for each beach and year is calculated from these normalized abundances per survey. When the survey length was missing or equal to 0, and assuming that the MSFD protocol had been applied, the length was set to 100 m.
Percentiles 50, 75, 95 and 99 have been calculated taking into account MSFD data for all years. More information is available in the attached documents. Warning: the absence of data on the map does not necessarily mean that litter is absent, but that no information has been entered in the Marine Litter Database for this area.
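The normalization and aggregation described above can be summarized in a short pandas sketch: normalize each count to a 100 m stretch, sum per survey, take the median per beach and year, and compute the overall percentiles. The input file and column names (survey_length_m, n_items, beach_id, year, survey_id) are assumptions for illustration, not the actual MLDB schema.

```python
import pandas as pd

# Hypothetical input: one row per litter item type per survey.
surveys = pd.read_csv("beach_litter_surveys.csv")

# Missing or zero survey lengths are assumed to follow the MSFD protocol (100 m).
length = surveys["survey_length_m"]
surveys["survey_length_m"] = length.where(length > 0, 100)

# Normalize each item count to 100 m, then sum to a total per survey.
surveys["n_items_100m"] = surveys["n_items"] * (100 / surveys["survey_length_m"])
per_survey = (surveys.groupby(["beach_id", "year", "survey_id"])["n_items_100m"]
              .sum()
              .reset_index())

# Median abundance per beach and year, plus percentiles over all surveys.
median_per_beach_year = per_survey.groupby(["beach_id", "year"])["n_items_100m"].median()
percentiles = per_survey["n_items_100m"].quantile([0.50, 0.75, 0.95, 0.99])
print(median_per_beach_year.head())
print(percentiles)
```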

  • '''DEFINITION''' Static product based on Sentinel-2 imagery (images from 202X to 2024). There are four bathymetry datasets: three derived from three different analysis methods and one merged product. The three methods are: intertidal, optical (radiative transfer equation) and wave kinematics (depth inversion from wave kinematics). The product includes four datasets with two variables each: the bathymetry from the intertidal, optical and wave kinematics methods and from the merge of the three, plus the quality index for each. '''DOI (product):''' https://doi.org/10.48670/mds-00364

  • Dataset summary: Plankton and detritus are essential components of the Earth’s oceans, influencing biogeochemical cycles and carbon sequestration. Climate change impacts their composition and marine ecosystems as a whole. To improve our understanding of these changes, standardized observation methods and integrated global datasets are needed to enhance the accuracy of ecological and climate models. Here, we present a global dataset for plankton and detritus obtained by two versions of the Underwater Vision Profiler 5 (UVP5). This release contains the images classified in 33 homogenized categories, as well as the metadata associated with them, reaching 3,114 profiles and ca. 8 million objects acquired between 2008 and 2018 at the global scale. The geographical distribution of the dataset is unbalanced, with the equatorial region (30°S-30°N) being the most represented, followed by the high latitudes in the northern hemisphere and lastly the high latitudes in the southern hemisphere. Detritus is the most abundant category in terms of concentration (90%) and biovolume (95%), although its classification into different morphotypes is still not well established. Copepoda was the most abundant taxon within the plankton, with Trichodesmium colonies being the second most abundant. The two versions of the UVP5 (SD and HD) have different imagers, resulting in different effective size ranges for analysing plankton and detritus from the images (HD objects > 600 µm, SD objects > 1 mm); morphological properties (grey levels, etc.) present similar patterns, although their ranges may differ. A large number of images of plankton and detritus will be collected in the future by the UVP5, and the public availability of this dataset will help it be used as a training set for machine learning and be improved by the scientific community. This will reduce uncertainty by classifying previously unclassified objects and expand the classification categories, ultimately enhancing biodiversity quantification.
Data tables: The data set is organised according to:
- samples: Underwater Vision Profiler 5 profiles, taken at a given point in space and time.
- objects: individual UVP images, taken at a given depth along each profile, on which various morphological features were measured and that were then classified taxonomically in EcoTaxa.
samples and objects have unique identifiers. The sample_id is used to link the different tables of the data set together. All files are tab-separated values, UTF-8 encoded and gzip compressed.
samples.tsv.gz
- sample_id    <int>    unique sample identifier
- sample_name    <text>    original sample identifier
- project    <text>    EcoPart project title
- lat, lon    <float>    location [decimal degrees]
- datetime    <text>    date and time of start of profile [ISO 8601: YYYY-MM-DDTHH:MM:SSZ]
- pixel_size    <float>    size of one pixel [mm]
- uvp_model    <text>    version of the UVP: SD: standard definition, ZD: zoomed, HD: high definition
samples_volume.tsv.gz
Along a profile, the UVP takes many images, each of a fixed volume. The profiles are cut into 5 m depth bins in which the number of images taken is recorded, and hence the imaged volume is known. This is necessary to compute concentrations.
- sample_id    <int>    unique sample identifier
- mid_depth_bin    <float>    middle of the depth bin (2.5 = from 0 to 5 m depth) [m]
- water_volume_imaged    <float>    volume imaged = number of full images × unit volume [L]
objects.tsv.gz
- object_id    <int>    unique object identifier
- object_name    <text>    original object identifier
- sample_id    <int>    unique sample identifier
- depth    <float>    depth at which the image was taken [m]
- mid_depth_bin    <float>    corresponding depth bin [m]; to match with samples_volume
- taxon    <text>    original taxonomic name as in EcoTaxa; is not consistent across projects
- lineage    <text>    taxonomic lineage corresponding to that name
- classif_author    <text>    unique, anonymised identifier of the user who performed this classification
- classif_datetime    <text>    date and time at which the classification was made
- group    <text>    broader taxonomic name, for which the identification is consistent over the whole dataset
- group_lineage    <text>    taxonomic lineage corresponding to this broader group
- area_mm2    <float>    measurements on the object, in real-world units (i.e. comparable across the whole dataset) …
- major_mm    <float>
- area    <float>    measurements on the object, in [pixels], therefore not directly comparable among the different UVP models and units
- mean    <float> …
- skeleton_area    <float>
properties_per_bin.tsv.gz
The information above makes it possible to compute concentrations, biovolumes, and average grey level within a given depth bin. The code to do so is in `summarise_objects_properties.R`.
- sample_id    <int>    unique sample identifier
- depth_range    <text>    range of depth over which the concentration/biovolume are computed: (start,end], in [m], where `(` means not including and `]` means including
- group    <text>    broad taxonomic group
- concentration    <float>    concentration [ind/L]
- biovolume    <float>    biovolume [mm3/L]
- avg_grey    <float>    average grey level of particles [no unit; 0 is black, 255 is white]
ODV_biovolumes.txt, ODV_concentrations.txt, ODV_grey_levels.txt
This is the same information as above, formatted in a way that Ocean Data View (https://odv.awi.de) can read. In ODV, go to Import > ODV Spreadsheet and accept all default choices.
Images
The images are provided in a separate, much larger, zip file. They are stored with the format `sample_id/object_id.jpg`, where `sample_id` and `object_id` are the integer identifiers used in the data tables above.
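The aggregation into properties_per_bin.tsv.gz is done by the R script `summarise_objects_properties.R` shipped with the data; the minimal Python sketch below illustrates the same idea for concentrations only, using the tables and column names documented above: count objects per sample, 5 m depth bin and broad group, then divide by the imaged water volume of that bin.

```python
import pandas as pd

# Tables as described above (tab-separated, gzip-compressed).
objects = pd.read_csv("objects.tsv.gz", sep="\t")
volumes = pd.read_csv("samples_volume.tsv.gz", sep="\t")

# Count objects per sample, 5 m depth bin and broad taxonomic group.
counts = (objects.groupby(["sample_id", "mid_depth_bin", "group"])
          .size()
          .rename("n_objects")
          .reset_index())

# Attach the imaged volume of each bin and derive a concentration [ind/L].
conc = counts.merge(volumes, on=["sample_id", "mid_depth_bin"], how="left")
conc["concentration"] = conc["n_objects"] / conc["water_volume_imaged"]
print(conc.head())
```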

  • In October 2019 we chose 15 sites from the 2019 EVHOE survey for environmental DNA (eDNA) sampling. The French international EVHOE bottom trawl survey is carried out annually during autumn in the Bay of Biscay (BoB) to monitor demersal fish resources. At each site, we sampled seawater using Niskin bottles deployed with a circular rosette. There were nine bottles on the rosette, each of them able to hold ∼5 l of water. At each site, we first cleaned the circular rosette and bottles with freshwater, then lowered the rosette (with bottles open) to 5 m above the sea bottom, and finally closed the bottles remotely from the boat. The 45 l of sampled water was transferred to four disposable and sterilized plastic bags of 11.25 l each to perform the filtration on board in a laboratory dedicated to the processing of eDNA samples. To speed up the filtration process, we used two identical filtration devices, each composed of an Athena® peristaltic pump (Proactive Environmental Products LLC, Bradenton, Florida, USA; nominal flow of 1.0 l min-1), a VigiDNA 0.20 μm filtration capsule (SPYGEN, le Bourget du Lac, France), and disposable sterile tubing. Each filtration device filtered the water contained in two plastic bags (22.5 l), representing two replicates per sampling site. We followed a rigorous protocol to avoid contamination during fieldwork, using disposable gloves and single-use filtration equipment and plastic bags to process each water sample. At the end of each filtration, we emptied the water from the capsule, replaced it with 80 ml of CL1 conservation buffer, and stored the samples at room temperature following the specifications of the manufacturer (SPYGEN, Le Bourget du Lac, France). We processed the eDNA capsules at SPYGEN, following the protocol proposed by Polanco-Fernández et al. (2020). Half of the extracted DNA was processed by Sinsoma using newly developed ddPCR assays for European seabass (Dicentrarchus labrax), European hake (Merluccius merluccius) and blackspot seabream (Pagellus bogaraveo). The other half of the extracted DNA was analysed using metabarcoding with the teleo primer. The raw metabarcoding data set is available at https://www.doi.org/10.16904/envidat.442 Bottom trawling using a GOV trawl was carried out before or after water sampling. The catch was sorted by species and catches in numbers and weight were recorded. No blackspot seabream individuals were caught.
Data content:
* ddPCR/: contains the ddPCR counts and DNA concentrations for each sample and species.
* SampleInfo/: contains the filter volume for each eDNA sample.
* StationInfo/: contains metadata related to the data collected in the field for each filter.
* Metabarcoding/: contains metabarcoding results for the teleo primer.
* Trawldata/: contains catch data in numbers and weight (kg).
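As a sketch of how the folders listed under "Data content" might be combined, the snippet below joins ddPCR results with the filtered volumes to express detections per litre of filtered seawater. All file and column names here (sample_id, species, copies, volume_l) are hypothetical placeholders; check the actual files in ddPCR/ and SampleInfo/ for their real structure.

```python
import pandas as pd

# Hypothetical file and column names; the actual tables may be organised differently.
ddpcr = pd.read_csv("ddPCR/ddpcr_results.csv")          # sample_id, species, copies
volumes = pd.read_csv("SampleInfo/filter_volumes.csv")  # sample_id, volume_l

merged = ddpcr.merge(volumes, on="sample_id", how="left")
merged["copies_per_litre"] = merged["copies"] / merged["volume_l"]

# Summary per target species (European seabass, European hake, blackspot seabream).
print(merged.groupby("species")["copies_per_litre"].describe())
```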

  • 210Pb, 226Ra and 137Cs were measured by non-destructive gamma spectrometry on marine sediment cores collected during the RIKEAU 2002 cruise on board R/V Thalia, on the shelf of the Bay of Biscay.

  • EMODnet (Chemical data) Map Server with ocean climatologies.

  • '''Short description:''' Near-real-time multi-mission global satellite-based spectral integral parameters. Only valid data are used, based on the corresponding L3 products. The included wave parameters are partition significant wave height, partition peak period and partition peak or principal direction. Those parameters are propagated in space and time at a 3-hour timestep and on a regular space grid, providing information on the swell propagation characteristics, from source to land. The output product corresponds to one file per month, gathering all the swell systems at the global scale. This product is processed by the WAVE-TAC multi-mission SAR and CFOSAT/SWIM data processing system to serve, in near-real time, the main operational oceanography and climate forecasting centers in Europe and worldwide. It processes data from the following missions: SAR (Sentinel-1A and Sentinel-1B) and CFOSAT/SWIM. All the spectral parameter measurements are optimally interpolated using swell observations belonging to the same swell field. The spectral data processing system produces wave integral parameters by partition (partition significant wave height, partition peak period and partition peak or principal direction) and the associated standard deviation and density of propagated observations. '''DOI (product):''' https://doi.org/10.48670/moi-00175
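As a simple worked illustration of the space-time propagation mentioned above (not the product's actual processing chain), deep-water swell travels at the group speed c_g = g T / (4π), where T is the partition peak period. The sketch below computes how far a swell partition would move during one 3-hour timestep; the 14 s period is an arbitrary example value.

```python
import math

G = 9.81  # gravitational acceleration [m s-2]

def deep_water_group_speed(peak_period_s: float) -> float:
    """Group speed of deep-water swell, c_g = g * T / (4 * pi), in m/s."""
    return G * peak_period_s / (4.0 * math.pi)

period_s = 14.0        # partition peak period [s], arbitrary example
timestep_s = 3 * 3600  # one 3-hour propagation timestep [s]

cg = deep_water_group_speed(period_s)
distance_km = cg * timestep_s / 1000.0
print(f"c_g = {cg:.1f} m/s, distance per 3 h step = {distance_km:.0f} km")
```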