General setup¶
This brief tutorial explains the general setup you will have to run each time you start a new Python session.
First, fire up a terminal and start an interactive Python session:
ipython
Then import the relevant libraries:
import fafbseg
import pymaid
Next, set up connections to the manual and autoseg CATMAID instances (make sure to replace HTTP_USER, HTTP_PW and API_TOKEN with the corresponding credentials):
manual = pymaid.CatmaidInstance('https://neuropil.janelia.org/tracing/fafb/v14',
                                api_token='API_TOKEN',
                                http_user='HTTP_USER',
                                http_password='HTTP_PW',
                                caching=False,
                                max_threads=20)

auto = pymaid.CatmaidInstance('https://neuropil.janelia.org/tracing/fafb/v14-seg-li-190805.0',
                              api_token='API_TOKEN',
                              http_user='HTTP_USER',
                              http_password='HTTP_PW',
                              caching=False,
                              max_threads=20)
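If you would rather not hardcode credentials in your scripts, here is a minimal sketch that reads them from environment variables instead (the variable names are just an assumption; use whatever names you set in your shell):

import os

# Read CATMAID credentials from environment variables instead of hardcoding
# them. Set these in your shell first, e.g.:
#   export CATMAID_API_TOKEN="..."
manual = pymaid.CatmaidInstance('https://neuropil.janelia.org/tracing/fafb/v14',
                                api_token=os.environ['CATMAID_API_TOKEN'],
                                http_user=os.environ['CATMAID_HTTP_USER'],
                                http_password=os.environ['CATMAID_HTTP_PW'],
                                caching=False,
                                max_threads=20)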
Last but not least, we need to tell fafbseg which source to use to query the segmentation data. For this, you use one of the fafbseg.use_... functions - which one depends on the data source you have available. See below for an explanation.
Choosing a segmentation source¶
When importing neurons into another CATMAID instance, fafbseg will suggest potentially overlapping neurons for merging. For this it uses the segmentation data from Google to determine whether skeletons are actually in the same “segment” or just adjacent. There are four options for where to get that data from:
Source | Description | Advantages | Disadvantages
---|---|---|---
Google Storage | Google has put their segmentation data on their cloud storage and we can use CloudVolume from the Seung lab to query it. | does not need any special permissions | slow
Local copy | At the highest resolution (i.e. not downsampled), the segmentation data for FAFB is ~850 GB. Since the data is publicly available, you can download it and use that local copy to fetch segmentation IDs. | fast | requires SSDs (USB or internal)
brainmaps API | This is the same API that neuroglancer uses to browse the segmentation data. | blazingly fast | needs permission to access the brainmaps API (see brainmappy for details)
Self-hosted service | If you have a machine that can act as a server, you can download a local copy of the data and serve it e.g. via CloudVolumeServer, similar to what brainmaps does. | potentially faster than brainmaps | requires server & know-how
Using Google Storage¶
This is the easiest solution as it does not need special permissions. To set it up, run fafbseg.use_google_storage() at startup:
# Accessing the most recent autoseg data
fafbseg.use_google_storage("https://storage.googleapis.com/fafb-ffn1-20190805/segmentation")
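Under the hood, fafbseg uses CloudVolume to fetch segment IDs. Just to illustrate what such a query looks like, here is a rough sketch (the coordinates are arbitrary example values, and fafbseg handles all of this for you):

# requires cloud-volume: pip3 install cloud-volume
from cloudvolume import CloudVolume

# Open the segmentation volume directly - this is roughly what happens
# internally when you call use_google_storage()
vol = CloudVolume('https://storage.googleapis.com/fafb-ffn1-20190805/segmentation')

# Cut out a single voxel and read its segment ID. Coordinates are in
# voxel space at the volume's native resolution.
cutout = vol[12000:12001, 12000:12001, 3000:3001]
print(cutout[0, 0, 0, 0])  # segmentation ID at that location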
Using local copy¶
An alternative to slow remote access via Google Storage is to download the data locally. See here for a brief explanation on how to do this.
Once you have set up a local copy of the segmentation data, you use fafbseg like so:
# Accessing the most recent autoseg data
fafbseg.use_local_data("path/to/segmentation")
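A precomputed CloudVolume-style dataset keeps an info file at its root. If you want a quick sanity check that your local copy is where you think it is, something like this works (just a sketch; the path is a placeholder):

import os

seg_path = "path/to/segmentation"

# A valid precomputed dataset has an `info` file at its root
if not os.path.isfile(os.path.join(seg_path, "info")):
    raise FileNotFoundError(f"No `info` file found in {seg_path}")

fafbseg.use_local_data(seg_path)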
Using brainmaps¶
You will need the brainmappy library (https://github.com/schlegelp/brainmappy) for this. If you haven’t already installed it, run this in a terminal:
pip3 install git+git://github.com/schlegelp/brainmappy@master
To tell fafbseg to use brainmaps to query segmentation data, use fafbseg.use_brainmaps() (see brainmappy for an explanation on credentials). If you are doing this for the very first time, you also need to provide a client_secret.json file:
fafbseg.use_brainmaps('772153499790:fafb_v14:fafb-ffn1-20190805',
                      client_secret='path/to/client_secret.json')
From now on credentials are stored locally and in the future you can simply run:
fafbseg.use_brainmaps('772153499790:fafb_v14:fafb-ffn1-20190805')
Tip
Each CATMAID autoseg instance contains data for a specific segmentation volume. You have to make sure that the volume set via fafbseg.use_... matches the segmentation used to generate the skeletons in that autoseg CATMAID instance.
Using self-hosted remote solution¶
If you are self-hosting the data, you will need to pass a URL to fafbseg.use_remote_service(). The service behind the URL has to accept a list of x/y/z locations as POST data and return a list of segmentation IDs in the same order:
fafbseg.use_remote_service('https://my-server.com/seg/values')
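To make the expected protocol concrete, here is a hypothetical exchange with such a service using requests (the payload format shown is an assumption - adapt it to whatever your server actually expects):

import requests

# POST a list of x/y/z locations (hypothetical voxel coordinates) ...
locs = [[12000, 12000, 3000],
        [13000, 12500, 3010]]

resp = requests.post('https://my-server.com/seg/values', json=locs)

# ... and receive one segment ID per location, in the same order
print(resp.json())  # e.g. [7393349056, 7393349057]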
Alternatively, set an environment variable:
export SEG_ID_URL="https://my-server.com/seg/values"
If you have the environment variable set, you can simply run:
fafbseg.use_remote_service()
Once you have set up one of the means explained above to access the segmentation data, you’re all done and ready to get to work!
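Putting it all together, a typical start-of-session setup (here using Google Storage as the segmentation source) might look like this:

import fafbseg
import pymaid

# Connections to the manual and autoseg CATMAID instances
manual = pymaid.CatmaidInstance('https://neuropil.janelia.org/tracing/fafb/v14',
                                api_token='API_TOKEN',
                                http_user='HTTP_USER',
                                http_password='HTTP_PW',
                                caching=False,
                                max_threads=20)

auto = pymaid.CatmaidInstance('https://neuropil.janelia.org/tracing/fafb/v14-seg-li-190805.0',
                              api_token='API_TOKEN',
                              http_user='HTTP_USER',
                              http_password='HTTP_PW',
                              caching=False,
                              max_threads=20)

# Tell fafbseg where to find the segmentation data
fafbseg.use_google_storage("https://storage.googleapis.com/fafb-ffn1-20190805/segmentation")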
Tip
ipython offers auto-completion: try, for example, typing fafbseg.use_ and then hitting TAB. There is also a neat feature for repeating past commands: type manual = and hit the up arrow on your keyboard to cycle through all past commands that match. This is very useful for recurring code like this general setup.