TOMCAT on the NOC news

We are featured on the NOC website and Twitter.

#oceancarbon

TEP – Review paper

It was great to meet everyone this week and I’m really excited about the work we will produce. Thanks again to Sari for organising such a fab meeting. Our offices are still full of cake!

One thing that came up many times in the meeting was TEP, and how it often goes undetected by optical instruments, or how an instrument may register one large aggregated TEP particle as many smaller ones. I realised after we left the meeting that this is probably an important point to highlight in the review paper: detecting smaller particles and excluding TEP reduces both estimated sinking rates and estimated organic carbon content. I thought we could have a small section in the discussion, somewhere under fluxes/processes, to highlight the complications of TEP. Really, we should probably start to use a generic term, e.g. ‘gelatinous material’, as TEP is just one of many kinds of sticky, transparent matter.

[Image: example aggregate particles]

Panels d and e are examples of manually classified ‘gelatinous’ particles. 10 % is an underestimate: I’m sure many of the smaller particles formed in a similar manner, but it’s harder to see, so the results are biased towards larger particles.

Looking back at my FlowCAM data, ‘gelatinous’ particles (classified manually by me) comprised 10 % of particles (n = 810, likely an underestimate), tended to sink faster than the other particles (gel = 124 m/d, others = 99 m/d, p = 0.08) and were significantly larger (ESD: gel = 848 µm, others = 463 µm, p < 0.001).
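For anyone wanting to run the same kind of two-group comparison on their own classified particles, here is a minimal sketch. The particle values below are made up purely for illustration, and the hand-rolled Mann–Whitney U is just to keep the example self-contained (in practice `scipy.stats.mannwhitneyu` would do this, p-value included):

```python
def mannwhitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` vs `b` (average ranks for ties)."""
    values = list(a) + list(b)
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank across the tie run
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    rank_sum_a = sum(ranks[: len(a)])
    return rank_sum_a - len(a) * (len(a) + 1) / 2

# Hypothetical sinking speeds (m/d) for two manually classified groups:
gel = [150, 130, 110, 95, 140]
others = [100, 90, 105, 80, 95, 85]
u = mannwhitney_u(gel, others)
print(f"U = {u} (max possible = {len(gel) * len(others)})")
```

A U near the maximum (here 30) suggests the gelatinous group consistently ranks above the others; a U near half the maximum suggests no difference.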

Food for thought!

Emma


Data processing

I was on a course about metabolomics this week and they had a very interesting session on data processing. We were discussing how data processing can alter the interpretation, and how often people do not fully report how they have handled the data (quality control, baseline shift correction, outlier treatment, etc.). The metabolomics group at Birmingham is now trying to get everyone to record their workflow in detail and attach it as supplementary material to their work, so that anyone can easily reproduce their results.

[Image: Galaxy workflow platform]

Maybe we should aim to encourage people to do something similar for optical data. (Obviously, building a tool like Birmingham’s is completely beyond the scope of this group!)
Any thoughts?
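Even without a Galaxy-style tool, a processing log could be as simple as a small machine-readable file saved alongside the data. A minimal sketch of what that might look like — all step names and parameters below are invented placeholders, not a proposed standard:

```python
import json
from datetime import date

# Hypothetical processing log for an optical-particle dataset: each step
# records what was done and with which parameters, so the full workflow
# could be re-run or attached as supplementary material.
workflow = {
    "dataset": "example_flowcam_run",  # placeholder dataset name
    "recorded": date(2016, 9, 1).isoformat(),
    "steps": [
        {"step": "quality_control", "params": {"min_edge_gradient": 10}},
        {"step": "background_correction", "params": {"method": "rolling_median"}},
        {"step": "outlier_treatment", "params": {"rule": "exclude_esd_above_um", "value": 2000}},
    ],
}

log_text = json.dumps(workflow, indent=2)
print(log_text)  # this text file is what would accompany the published data
```

The point is only that the record is explicit and ordered — anyone reading the supplementary file can see exactly which corrections were applied and in what sequence.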

#dataprocessing #data #workflow

New paper

There is a new article on backscatter:
Optical classification and characterization of marine particle assemblages within the western Arctic Ocean
(by Neukermans, Reynolds & Stramski)

#newpaper

September is coming

I am so excited about our first meeting in September! We have a great programme and exciting discussions ahead of us. I have also received great interest from several scientists around the world. Please let me know if there is anyone else who is interested in our work and I will add them to the email list!

Device details

Hi everyone,

For those of you who use a specific optical device, could you write a short section about it on the ‘Device’ page?
It would also be great to have some technical details (these could be added later), including:

  • min and max particle size that can be captured
  • volume of water that is captured
  • raw output format (e.g. image as .jpg, etc.)
  • level of identification; e.g. broad categories (particle, zooplankton), fine identification (down to genus level)
  • deployment details (towed, vertically lowered, on an autonomous platform, etc.)
  • deployment restrictions (depth? temperature?)
  • power consumption
  • need for calibration

Any other ideas?

Devices:

others?

Cheers,
Sari

Q5. What are the most useful ways of presenting size distributions?

And how should slope determinations be made?

George’s comment (http://wp.me/p7ffKH-B): Here, I see that a lot of observations are presented with incorrect units assigned to them, which makes results difficult to interpret. I also see that people are locked into ways of presenting data that mask the trends. I personally think that differential number spectra mask a lot of important information because the values span such a large range. Plus, not all values are equally certain: the small number of large particles in a given set of measurements makes their values more uncertain than those of the small ones, which affects how one should fit slopes to the data.
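One way to act on that last point is to weight each size bin by its particle count when fitting the slope, so the sparsely populated large-size bins pull the fit less. A minimal sketch — the function name, the Poisson-motivated weighting, and the synthetic bins are all illustrative assumptions, not an endorsed method:

```python
import math

def spectrum_slope(diameters_um, counts, widths_um):
    """Weighted log-log fit of a differential number spectrum n(D) = N / dD.
    Bins with more particles get more weight (Poisson counting error)."""
    x, y, w = [], [], []
    for d, n, dd in zip(diameters_um, counts, widths_um):
        if n == 0:
            continue  # empty bins carry no information here
        x.append(math.log10(d))
        y.append(math.log10(n / dd))
        w.append(n)  # heuristic inverse-variance weight for log-counts
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    return (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
            / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))

# Synthetic bins drawn exactly from n(D) proportional to D^-4 (illustration only):
d = [10.0, 20.0, 40.0, 80.0]    # bin mid-point diameters (um)
dd = [10.0, 10.0, 10.0, 10.0]   # bin widths (um)
n = [1e6 * di ** -4 * ddi for di, ddi in zip(d, dd)]
print(round(spectrum_slope(d, n, dd), 3))  # → -4.0
```

With real data the unweighted and weighted slopes will differ whenever the large-particle bins are noisy — which is exactly the regime George describes.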

#sizespectrum #sizedistribution