Creation year: 2025 (343 records)
Records 1-10 of 343
  • In October 2019 we chose 15 sites from the 2019 EVHOE survey for environmental DNA (eDNA) sampling. The French international EVHOE bottom trawl survey is carried out annually during autumn in the Bay of Biscay (BoB) to monitor demersal fish resources. At each site, we sampled seawater using Niskin bottles deployed on a circular rosette. The rosette held nine bottles, each able to hold ∼5 l of water. At each site, we first cleaned the rosette and bottles with freshwater, then lowered the rosette (with bottles open) to 5 m above the sea bottom, and finally closed the bottles remotely from the boat. The 45 l of sampled water was transferred to four disposable, sterilized plastic bags of 11.25 l each for on-board filtration in a laboratory dedicated to the processing of eDNA samples. To speed up filtration, we used two identical filtration devices, each composed of an Athena® peristaltic pump (Proactive Environmental Products LLC, Bradenton, Florida, USA; nominal flow of 1.0 l min⁻¹), a VigiDNA 0.20 μm filtration capsule (SPYGEN, Le Bourget du Lac, France), and disposable sterile tubing. Each filtration device filtered the water contained in two plastic bags (22.5 l), representing two replicates per sampling site. We followed a rigorous protocol to avoid contamination during fieldwork, using disposable gloves and single-use filtration equipment and plastic bags to process each water sample. At the end of each filtration, we emptied the water from the capsule, replaced it with 80 ml of CL1 conservation buffer, and stored the samples at room temperature following the manufacturer's specifications (SPYGEN, Le Bourget du Lac, France). We processed the eDNA capsules at SPYGEN following the protocol proposed by Polanco-Fernández et al. (2020).
Half of the extracted DNA was processed by Sinsoma using newly developed ddPCR assays for European seabass (Dicentrarchus labrax), European hake (Merluccius merluccius), and blackspot seabream (Pagellus bogaraveo). The other half was analysed by metabarcoding with the teleo primer. The raw metabarcoding data set is available at https://www.doi.org/10.16904/envidat.442. Bottom trawling with a GOV trawl was carried out before or after water sampling; the catch was sorted by species, and catches in numbers and weight were recorded. No blackspot seabream individuals were caught.
Data content:
    * ddPCR/: ddPCR counts and DNA concentrations for each sample and species.
    * SampleInfo/: filtered volume for each eDNA sample.
    * StationInfo/: metadata related to the data collected in the field for each filter.
    * Metabarcoding/: metabarcoding results for the teleo primer.
    * Trawldata/: catch data in numbers and weight (kg).

  • The ARCHYD dataset, collected since 1988, is the longest-running hydrologic time series in Arcachon Bay. The objectives of this monitoring programme are to assess the influence of oceanic and continental inputs on the water quality of the bay and their implications for biological processes. It also aims to estimate the effectiveness of management policies in the bay by providing information on trends and/or shifts in pressure, state, and impact variables. Sampling is carried out at stations spread across the entire bay; since 1988, the number and location of the stations have changed slightly to better account for the gradient of oceanic and continental inputs. In 2005, the ARCHYD network was reduced to 8 stations that Ifremer still samples today. All stations are sampled weekly, at midday, alternately around the low spring tide and the high neap tide. The data are complementary to the REPHY dataset. Physico-chemical measurements include temperature, salinity, turbidity, suspended matter (organic, mineral), dissolved oxygen, and dissolved inorganic nutrients (ammonium, nitrite+nitrate, phosphate, silicate). Biological measurements include pigment proxies of phytoplankton biomass and state (chlorophyll a and phaeopigments).

  • This visualization product displays the abundance of single-use plastic (SUP) items among marine macro-litter (> 2.5 cm) per beach per year from Marine Strategy Framework Directive (MSFD) monitoring surveys. EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, beach litter data have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). Harmonizing all the data has been the most challenging task given the heterogeneity of the data sources, sampling protocols, and reference lists used at the European scale. Preliminary processing steps were necessary to harmonize the data:
    * Exclusion of the OSPAR 1000 protocol, following OSPAR's approach of no longer including these data in the monitoring;
    * Selection of MSFD surveys only (exclusion of other monitoring, cleaning, and research operations);
    * Exclusion of beaches without coordinates;
    * Selection of SUP-related items only. The list of selected items is attached to this metadata. It was created using the EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines, and the JRC Joint list of litter categories for marine macro-litter monitoring (these three documents are attached to this metadata). The selection was adapted to the Joint list's SUP identification and therefore differs somewhat from the selection made for previously published versions of this product;
    * Normalization of survey lengths to 100 m and 1 survey per year: in some cases the survey length was not exactly 100 m, so to compare litter abundance across beaches a normalization is applied:
      Number of SUP items of the survey (normalized to 100 m) = number of SUP-related items of the survey x (100 / survey length)
      This normalized number of SUP-related items is then summed to obtain the total normalized number of SUP-related items for each survey. Finally, the median abundance of SUP-related items for each beach and year is calculated from these normalized per-survey abundances. When the survey length was null or equal to 0, the length was set to 100 m, assuming the MSFD protocol had been applied.
Percentiles 50, 75, 95 & 99 have been calculated taking into account SUP-related plastic items from MSFD data for all years. More information is available in the attached documents. Warning: the absence of data on the map does not necessarily mean that none exist, only that no information has been entered in the Marine Litter Database for this area.
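The normalization and per-beach aggregation described above can be sketched as follows. This is an illustrative sketch, not the EMODnet processing code; the beach names, lengths, and counts are invented.

```python
# Hedged sketch: normalize beach-litter survey counts to a 100 m reference
# length, then take the median per beach and year. Values are illustrative.
from statistics import median
from collections import defaultdict

surveys = [
    # (beach, year, survey length in m, SUP item count)
    ("A", 2020, 100.0, 30),
    ("A", 2020, 50.0, 20),
    ("B", 2020, 0.0, 12),   # null/zero length: assume the 100 m MSFD protocol
]

per_beach_year = defaultdict(list)
for beach, year, length_m, n_items in surveys:
    length_m = length_m if length_m and length_m > 0 else 100.0
    # Normalized count = items x (100 / survey length)
    per_beach_year[(beach, year)].append(n_items * (100.0 / length_m))

# Median normalized SUP abundance per beach and year
medians = {key: median(vals) for key, vals in per_beach_year.items()}
```

Beach "A" gets two normalized values (30 and 40 items per 100 m), so its annual median is their midpoint; the zero-length survey on beach "B" is treated as a standard 100 m transect.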

  • This visualization product displays the density of floating micro-litter per net, normalized per km² per year, from research and monitoring protocols. EMODnet Chemistry included the collection of marine litter in its 3rd phase. Before 2021, there was no coordinated effort at the regional or European scale for micro-litter. Given this situation, EMODnet Chemistry proposed to adopt the data gathering and data management approach generally applied to marine data, i.e., populating metadata and data in the CDI Data Discovery and Access service using dedicated SeaDataNet data transport formats. EMODnet Chemistry is currently the official EU collector of micro-litter data from Marine Strategy Framework Directive (MSFD) National Monitoring activities (descriptor 10). A series of specific standard vocabularies and standard terms related to micro-litter have been added to the SeaDataNet NVS (NERC Vocabulary Server) Common Vocabularies to describe micro-litter. European micro-litter data are collected by the National Oceanographic Data Centres (NODCs). Micro-litter map products are generated from NODC data after a test of the aggregated collection, including data and data format checks and data harmonization. A filter is applied to represent only micro-litter sampled according to research and monitoring protocols such as MSFD monitoring. Densities were calculated for each net as follows:
Density (number of particles per km²) = micro-litter count / (sampling effort (km) x net opening (cm) x 0.00001)
When the sampling effort (km) was lacking and point coordinates were known (start and end of the sampling), the sampling effort was calculated using the PostGIS ST_DistanceSpheroid function with a WGS84 measurement spheroid. When the micro-litter count or the net opening was not filled in, the density could not be calculated. Percentiles 50, 75, 95 & 99 have been calculated taking into account data for all years.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the National Oceanographic Data Centre (NODC) for this area.
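The density formula and the distance fallback above can be sketched as follows. This is a hedged illustration with invented values: the product itself computes the fallback distance with PostGIS ST_DistanceSpheroid on a WGS84 spheroid, while the function below is only a spherical (great-circle) stand-in.

```python
# Hedged sketch of: density (particles/km²) = count / (effort_km * opening_cm * 1e-5)
# The 1e-5 factor converts the net opening from cm to km, so the denominator
# is a swept area in km².
import math

def micro_litter_density_km2(count, effort_km, net_opening_cm):
    """Particles per km²; returns None when a required term is missing."""
    if count is None or effort_km is None or net_opening_cm is None:
        return None
    return count / (effort_km * net_opening_cm * 1e-5)

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance (spherical stand-in for ST_DistanceSpheroid)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: 120 particles, 2 km tow, 60 cm net opening
d = micro_litter_density_km2(count=120, effort_km=2.0, net_opening_cm=60.0)
# 120 / (2.0 * 60.0 * 1e-5) = 100000 particles per km²
```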

  • '''DEFINITION''' Important note to users: these data are not to be used for navigation. The data are at 100 m resolution and of as high quality as possible. They have been produced with state-of-the-art technology and validated to the best of the producer's ability where sufficient high-quality data were available. These data may be useful for planning and modelling purposes. The user should independently assess the adequacy of any material, data, and/or information in the product before relying upon it. Neither Mercator Ocean International/Copernicus Marine Service nor the data originators are liable for any negative consequences following direct or indirect use of the product information, services, data products, and/or data. Product overview: this is a satellite-derived bathymetry product covering the global coastal area (where data retrieval is possible), at 100 m resolution, based on Sentinel-2. The product has been developed with 3 methodologies: Intertidal Satellite-Derived Bathymetry; physics-based optical Satellite-Derived Bathymetry from RTE inversion; and Wave Kinematics Satellite-Derived Bathymetry from wave dispersion. There is one dataset for each method (each including a quality index based on uncertainty) and an additional one merging the three (also with a quality index). Using their expertise and dedicated techniques, the consortium sought an optimal balance between coverage and data quality. '''DOI (product):''' https://doi.org/10.48670/mds-00364

  • This dataset contains all satellite altimeter wave heights above 9 m from the following satellite missions: ERS-1, ERS-2, Topex-Poseidon (Topex only), Envisat, SARAL, Jason-1, Jason-2, Jason-3, Sentinel-3A, Sentinel-3B, Sentinel-6A, CryoSat-2, CFOSAT, SWOT. Storm events were identified with the DetectHsStorm package developed by M. De Carlo and F. Ardhuin (https://github.com/ardhuin/). These data can be combined with modelled storm tracks (see F. Ardhuin, M. De Carlo, Storm tracks based on wave heights from LOPS WAVEWATCH III hindcast and ERA5 reanalysis, years 1991-2024, SEANOE (2025), doi: 10.17882/105148).
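As a hedged sketch only, and not the DetectHsStorm algorithm, one simple way to turn threshold exceedances into storm "events" is to keep wave heights above 9 m and merge detections that are close in time. The 12 h merge window and all observations below are invented.

```python
# Hedged sketch: group significant-wave-height exceedances (> 9 m) into
# events by merging detections separated by less than a fixed time gap.
HS_THRESHOLD_M = 9.0   # threshold used by the dataset
MAX_GAP_H = 12.0       # assumed merge window, not from the source

obs = [  # (time in hours, significant wave height in m) -- illustrative
    (0.0, 9.4), (3.0, 10.1), (30.0, 9.8), (33.0, 11.2), (60.0, 8.0),
]

events, current, last_t = [], [], None
for t, hs in obs:
    if hs <= HS_THRESHOLD_M:
        continue                      # keep only exceedances
    if last_t is not None and t - last_t > MAX_GAP_H:
        events.append(current)        # gap too large: close the event
        current = []
    current.append((t, hs))
    last_t = t
if current:
    events.append(current)
```

With these invented observations the sketch yields two events, separated by the 27 h gap between hours 3 and 30.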

  • These rasters correspond to the environmental predictors used to produce the Mediterranean bioregions of megabenthic communities.

  • This visualization product displays the density of floating micro-litter per net, normalized per m³ per year, from specific protocols different from research and monitoring protocols. EMODnet Chemistry included the collection of marine litter in its 3rd phase. Before 2021, there was no coordinated effort at the regional or European scale for micro-litter. Given this situation, EMODnet Chemistry proposed to adopt the data gathering and data management approach generally applied to marine data, i.e., populating metadata and data in the CDI Data Discovery and Access service using dedicated SeaDataNet data transport formats. EMODnet Chemistry is currently the official EU collector of micro-litter data from Marine Strategy Framework Directive (MSFD) National Monitoring activities (descriptor 10). A series of specific standard vocabularies and standard terms related to micro-litter have been added to the SeaDataNet NVS (NERC Vocabulary Server) Common Vocabularies to describe micro-litter. European micro-litter data are collected by the National Oceanographic Data Centres (NODCs). Micro-litter map products are generated from NODC data after a test of the aggregated collection, including data and data format checks and data harmonization. A filter is applied to represent only micro-litter sampled according to a very specific protocol such as the Volvo Ocean Race (VOR) or Oceaneye. Densities were calculated for each net as follows:
Density (number of particles per m³) = micro-litter count / sampling effort (m³)
When the micro-litter count was not filled in, the density could not be calculated. Percentiles 50, 75, 95 & 99 have been calculated taking into account data for all years. Warning: the absence of data on the map does not necessarily mean that none exist, only that no information has been entered in the National Oceanographic Data Centre (NODC) for this area.
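The per-volume density calculation above can be sketched as follows (a hedged illustration with invented values; as stated, a missing count yields no density).

```python
# Hedged sketch of: density (particles/m³) = micro-litter count / effort (m³)
def micro_litter_density_m3(count, effort_m3):
    """Return particles per m³, or None when a required term is missing."""
    if count is None or effort_m3 in (None, 0):
        return None
    return count / effort_m3

# Example: 48 particles in 12 m³ of filtered water
d = micro_litter_density_m3(count=48, effort_m3=12.0)  # 4.0 particles per m³
```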

  • A quantitative understanding of the integrated ocean heat content depends on our ability to determine how heat is distributed in the ocean and what the associated coherent patterns are. This dataset contains the results of the Maze et al. (2017, Progress in Oceanography) study demonstrating how this can be achieved using unsupervised classification of Argo temperature profiles. The dataset contains:
    * A netcdf file with classification results (labels and probabilities) and coordinates (lat/lon/time) of 100,684 Argo temperature profiles in the North Atlantic.
    * A netcdf file with a Profile Classification Model (PCM) that can be used to classify new temperature profiles from observations or numerical models.
The classification method is a Gaussian Mixture Model that decomposes the probability density function of the dataset into a weighted sum of Gaussian modes. North Atlantic Argo temperature profiles between 0 and 1400 m depth were interpolated onto a regular 5 m grid, compressed using Principal Component Analysis, and finally classified with a Gaussian Mixture Model. To use the netcdf PCM file to classify new data, check out our PCM Matlab and Python toolbox: https://github.com/obidam/pcm
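The standardize, PCA-compress, GMM-classify chain described above can be sketched with scikit-learn. This is not the authors' PCM toolbox; the synthetic profiles and all model sizes below are illustrative (the real dataset holds 100,684 profiles and the study's class count differs).

```python
# Hedged sketch of a Profile Classification Model workflow with scikit-learn.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "Argo" temperature profiles on a regular 5 m grid, 0-1400 m
depth = np.arange(0.0, 1400.0 + 5.0, 5.0)
profiles = 10.0 - 0.004 * depth + rng.normal(0.0, 0.5, (200, depth.size))

# 1) Standardize each depth level, 2) compress with PCA,
# 3) fit a GMM that decomposes the data's PDF into Gaussian modes.
X = StandardScaler().fit_transform(profiles)
Xc = PCA(n_components=10).fit_transform(X)
gmm = GaussianMixture(n_components=4, random_state=0).fit(Xc)

labels = gmm.predict(Xc)        # class label per profile
proba = gmm.predict_proba(Xc)   # posterior probability per class
```

The fitted model plays the role of the netcdf PCM file: once trained, `predict` and `predict_proba` assign labels and class probabilities to new profiles passed through the same scaler and PCA.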