cmake_minimum_required(VERSION 3.18)
# --------------------------------------------------------------------------------------------------
......@@ -156,10 +156,30 @@ endif()
find_package(spdlog QUIET)
if(spdlog_FOUND)
if(fmt_FOUND)
set(SPDLOG_TEST_LIBS spdlog fmt)
else()
set(SPDLOG_TEST_LIBS spdlog)
endif()
# # This syntax requires CMake 3.25, which ships with Ubuntu 23.04 and Debian bookworm...
# try_compile(SPDLOG_COMPILE_SUCCESS
# PROJECT "spdlogtest"
# SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/cmake/spdlog-compile-test")
# # ...use older syntax for now.
try_compile(SPDLOG_COMPILE_SUCCESS
"${CMAKE_CURRENT_BINARY_DIR}/spdlog-compile-test"
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/spdlog-compile-test/main.cpp"
LINK_LIBRARIES ${SPDLOG_TEST_LIBS}
OUTPUT_VARIABLE TESTOUTPUT
CXX_STANDARD ${CMAKE_CXX_STANDARD}
CXX_STANDARD_REQUIRED ON)
endif()
if(spdlog_FOUND AND SPDLOG_COMPILE_SUCCESS)
get_target_property(spdlog_LOCATION spdlog::spdlog IMPORTED_LOCATION_NONE)
message(STATUS "Found spdlog v${spdlog_VERSION}: ${spdlog_LOCATION}")
else()
option(SPDLOG_INSTALL "Install dependency SPDLOG" ON)
FetchContent_Declare(spdlog
GIT_REPOSITORY https://github.com/gabime/spdlog.git
GIT_TAG v1.12.0)
......
......@@ -7,19 +7,21 @@ It is written in C++, but there are interfaces for R (planned via CRAN) and for
A more detailed description of the library can be found in the API documentation, and there is also a tutorial on the basic usage. Please have a look. See below for how to build the documentation.
## Build
Note: This repository uses submodules, so clone with:
```bash
git clone --recurse-submodules URL
# or in two separate steps:
git clone URL
cd imagefusion
git submodule update --init --recursive
```
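If you are unsure whether the submodules were actually fetched, `git submodule status` marks every uninitialized submodule with a leading `-` (a quick convenience check, not part of the required workflow):

```shell
# Print submodules that are still uninitialized; outside a git checkout or
# with everything already initialized this prints nothing.
git submodule status 2>/dev/null | grep '^-' || true
```

If any line appears, run `git submodule update --init --recursive` as shown above.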
This project uses CMake as its build system and should work on Windows, Linux and macOS. To compile the library you need a C++17-capable compiler such as gcc or clang (note that Boost did not support clang on Windows some time ago). Then you need some dependencies:
* OpenCV
* GDAL
* Boost (Exception, ICL, Iterator, Tokenizer)
......@@ -28,6 +30,7 @@ This project uses CMake as build system and should work on Windows, Linux and Ma
* Boost Unit Test Framework (optional, only for test configuration)
For the documentation you need:
* doxygen
* graphviz (for dot diagrams)
* dia (for dia diagrams)
......@@ -38,106 +41,150 @@ For the documentation you need:
Independent of the platform, you can build and install imagefusion using conda-build. The dependencies are all available as conda packages on conda-forge and will be installed automatically.
```bash
conda update conda
```
Install the build tool in your `base` environment:
```bash
conda install conda-build conda-verify
```
Then go to the directory containing the conda recipe
```bash
cd python/conda.recipe
```
Build the package using conda-forge dependencies. Either add the conda-forge channel permanently, as recommended [here](https://conda-forge.org/docs/user/introduction.html#how-can-i-install-packages-from-conda-forge):
```bash
conda config --add channels conda-forge
conda config --set channel_priority strict
conda build .
```
However, this might lead to issues later on with other packages. So you can also choose the conda-forge channel only for building:
```bash
conda build -c conda-forge .
```
This takes a while. When it is finished, you can install it into a new environment (`--use-local` and `-c local` do not work, currently, see: [conda/conda/7758](https://github.com/conda/conda/issues/7758) [conda/conda/7024](https://github.com/conda/conda/issues/7024)):
```bash
conda create --name imfu -c conda-forge -c ${CONDA_PREFIX}/conda-bld/ imagefusion # [linux or osx]
conda create --name imfu -c conda-forge -c %CONDA_PREFIX%/conda-bld/ imagefusion # [win]
conda activate imfu
```
Or install it into the currently active environment:
```bash
conda install -c conda-forge -c ${CONDA_PREFIX}/conda-bld/ imagefusion # [linux or osx]
conda install -c conda-forge -c %CONDA_PREFIX%/conda-bld/ imagefusion # [win]
```
Or to reinstall:
```bash
conda install -c ${CONDA_PREFIX}/conda-bld/ --force-reinstall imagefusion-python imagefusion-utils libimagefusion # [linux or osx]
conda install -c %CONDA_PREFIX%/conda-bld/ --force-reinstall imagefusion-python imagefusion-utils libimagefusion # [win]
```
Now everything is in place: utilities like `starfm`, the library, a CMake file to import the library into a C++ project and, most importantly, the Python interface. Try it! Let's run one of the Python examples:
```bash
(imfu) $ cd ../examples # go to python/examples
(imfu) $ python starfm-example.py # or execute another example
```
Or execute a binary utility:
```bash
(imfu) $ starfm --help
```
### Linux
Ubuntu (20.04) and Debian (Buster) are supported; other distributions should work with similar packages. Building on Linux is very easy. Assuming Ubuntu, install the build dependencies with
```bash
sudo apt install cmake g++ make libopencv-dev libgdal-dev libboost-all-dev libarmadillo-dev libopenblas-base libgomp1
# if you want to build with clang++ install these (optional, you would also have to set it as default compiler afterwards):
sudo apt install clang libomp-dev
```
and, optionally, the documentation dependencies with
```bash
sudo apt install doxygen graphviz dia texlive-full
```
Now go to the folder where this readme is located. Then do the following
```bash
mkdir build
cd build
cmake ..
make all -j 4 # -j 4 means 4 compiler jobs in parallel
```
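The fixed `-j 4` above is just an example; a common alternative (assuming GNU coreutils is installed, which is the default on Ubuntu/Debian) is to start one compiler job per CPU core:

```shell
# One compiler job per core; on macOS replace `nproc` with `sysctl -n hw.ncpu`.
make all -j"$(nproc)"
```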
which will build the library in `lib/` and the utilities in `bin/`. For installation you can either install directly or build a Debian package. For direct installation use
```bash
sudo make install
# or when you are logged in as root with `su -` or `sudo -i` just:
make install
```
The default install path is `/usr/local`, which is the correct one for self-compiled software. The `all` target includes the Python interface, which gets installed to `/usr/local/lib/python3/dist-packages/imagefusion/` by default when using the Python from the Linux distribution. To change the install path, e. g. to `~/.local` for a user installation, use
```bash
cmake -DCMAKE_INSTALL_PREFIX:PATH=$HOME/.local ..
make install
```
or whatever you prefer as user install path. This usually just works (it will install `libimagefusion.so` to `~/.local/lib` and the binaries will find that library; you do *not* need to change `LD_LIBRARY_PATH` for the imagefusion binaries). You might want to add `~/.local/bin` to the `PATH` in your profile file `~/.profile` and source it with `source ~/.profile`.
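The `PATH` addition in `~/.profile` can be written idempotently, so sourcing the file several times does not duplicate the entry (a sketch, not the only way to do it):

```shell
# Prepend ~/.local/bin to PATH only if it is not already present.
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;                  # already on PATH, nothing to do
  *) PATH="$HOME/.local/bin:$PATH" ;;
esac
export PATH
```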
You can also build a Debian package. To build and install it, use
```bash
make package
sudo apt install ./imagefusion*.deb
```
To build and execute the tests, use
```bash
make testimagefusion
./testimagefusion
```
To build the documentation (open `doc/doc.html` in the source directory, not the build directory), use one of the following:
```bash
make doc
# or for devs:
make docinternal
```
#### Python interface
Running `sudo make install` as described above already installs the Python interface to `/usr/local/lib/python3/dist-packages/imagefusion/`. With conda the requirements are easier to handle, because they are available as conda packages. For installing imagefusion with conda, see [conda build](#conda). You can also try pip:
```bash
cd imagefusion # make sure you are in the repository directory
pip install . # or: pip3 if you use the pip version from the linux distribution
```
After installation you might want to test the python installation (note, conda build automatically does this during building):
```bash
# in the repository directory:
python -m unittest discover -s tests/ -p "*_test.py"
```
#### Troubleshooting
......@@ -146,19 +193,23 @@ After installation you might want to test the python installation (note, conda b
* Check whether both the `*-dev` and the normal versions of the libraries are installed (see above)
* If there is a multi-user installation of Anaconda or Miniconda (e. g. in `/opt`), the GDAL CMake file might find the library shipped with the python package instead of the one installed with the package manager (e. g. apt). To sort this out, remove the `gdal` package (or maybe the `opencv` package, if that causes problems) as root user from the base environment:
```bash
conda activate
conda remove gdal
conda deactivate
```
Then as normal user create a new environment where you install the required packages:
```bash
conda create -n imagefusion gdal rios python-fmask rasterio matplotlib pandas
conda activate imagefusion
```
Alternatively, as a quick fix, prepend the `cmake` command with the GDAL search path, e. g.:
```bash
GDAL_DIR=/usr cmake ..
```
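The `GDAL_DIR=/usr` prefix sets the variable only for that single `cmake` process; it does not change your shell environment. A quick way to convince yourself, with `sh` standing in for `cmake`:

```shell
# The per-command assignment is visible in the child process only.
GDAL_DIR=/usr sh -c 'echo "child sees: $GDAL_DIR"'
echo "parent sees: ${GDAL_DIR:-<unset>}"
```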
* *ModuleNotFoundError: No module named 'apt_pkg'* during cmake run
......@@ -169,24 +220,33 @@ After installation you might want to test the python installation (note, conda b
You probably cloned the repository with `git clone URL`, which does not fetch the submodules. After cloning you can fetch them with:
```bash
git submodule update --init --recursive
```
### macOS
This should work similarly to Linux. However, macOS lacks a package manager by default, so if you do not have it yet, install Homebrew now:
```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
Then install the dependencies:
```bash
brew install cmake opencv armadillo gdal boost libomp
```
Now go to the folder where this readme is located. Then do the following
```bash
mkdir build
cd build
cmake ..
make starfm estarfm staarch -j 4 # select the targets you want, imfupython might not work yet
```
which will build the library as `lib/libimagefusion.dylib` and the utilities as `bin/starfm`, `bin/estarfm` and `bin/staarch`.
The build on macOS is barely tested.
......@@ -195,16 +255,23 @@ The build on macOS is barely tested.
Multiple ways are possible. We suggest using MSYS2, which basically lets you install packages the Linux way. Assuming a 64 bit system, download MSYS2 64 bit from https://msys2.github.io/ and install it to the default folder (`C:\msys64`). First we will update pacman and install a compiler (MinGW). Start MSYS2 MSYS and execute:
```bash
pacman -Syu
```
Close MSYS2 and start it again. Execute:
```bash
pacman -Su
pacman -S mingw-w64-x86_64-gcc
```
Close MSYS2.
Next, we install all the dependencies. Start MSYS2 MSYS again and execute:
```bash
pacman -S mingw64/mingw-w64-x86_64-cmake \
mingw64/mingw-w64-x86_64-extra-cmake-modules \
mingw64/mingw-w64-x86_64-make \
msys/make \
......@@ -217,6 +284,7 @@ Next, we install all the dependencies. Start MSYS2 MSYS again and execute:
mingw64/mingw-w64-x86_64-arpack \
mingw64/mingw-w64-x86_64-curl \
mingw64/mingw-w64-x86_64-proj
```
The above command will install cmake and all the other external libraries the framework depends on. After installing the dependencies, the imagefusion framework can be compiled and built.
Before we start compiling and building the imagefusion framework, the following paths must be added to the user variables as well as the environment variable `PATH`.
......@@ -228,7 +296,6 @@ User variables:
| GDAL_DATA | C:\msys64\mingw64\share\gdal
| GDAL_LIBRARY | C:\msys64\mingw64\bin
System Variables: Place these locations at the start of the values.
1. C:\msys64\mingw64\bin
......@@ -240,10 +307,12 @@ Once the above paths are added properly, the imagefusion framework is now ready
To compile the imagefusion framework, open the Windows command prompt, navigate to the imagefusion directory and execute the following commands:
```cmd
mkdir build
cd build
cmake -G "MSYS Makefiles" ..
make all
```
The execution of the above commands will compile and build the imagefusion framework. This will result in a bin directory which contains the `libimagefusion.dll` file along with other utilities as `.exe` files which are built using the imagefusion library.
This includes different data fusion utilities, a cloud masking and interpolation utility, and image crop and image compare utilities. The utilities and the `libimagefusion.dll` will be available in the `bin` folder. The utilities can be executed directly from the command prompt using the appropriate options (`--help` is available with every utility).
......@@ -254,29 +323,32 @@ To compile the documentation following applications / tools / software are requi
1. *Dia*:
Tool to generate diagrams in the API documentation.
Available in http://dia-installer.de/
2. *Graphviz*:
Tool to generate graphs in the API documentation.
Available in https://www.graphviz.org/Download_windows.php
3. *Ghostscript*:
Tool to generate pdf in the API documentation.
Available in https://www.ghostscript.com/download/gsdnld.html
4. *Doxygen*:
Tool to generate the API documentation.
Available in https://www.doxygen.nl/download.html#srcbin
5. *MikTeX*:
Tool to compile latex files.
Available in https://miktex.org/download
After installing all the above mentioned applications / tools / software, their respective paths should be added to the environment variable `PATH`.
Now the documentation can be compiled by simply executing the following command in the *windows command prompt* with the current directory as build.
```cmd
make doc
```
Executing this command will create a doc directory in the main imagefusion directory along with a `doc.html` file. Opening this file will take you to the main index page of the documentation.
#### Packaging
The CMake file of the imagefusion framework also has targets to generate a Windows installer package for easy distribution of the imagefusion framework along with the implemented data fusion algorithms, remote sensing utilities, header files and documentation. This enables users to use the imagefusion framework without having to install many tools and prepare an environment suitable for building and compiling it.
In order to perform the packaging and generate a `.msi` Microsoft installer file, the following tool is required.
......@@ -292,9 +364,11 @@ System variable:
| -------- | -----------------------------------------
| WIX | C:\Program Files (x86)\WiX Toolset v3.10\
Now we are ready to make the imagefusion framework package. The package can be generated by executing the following command from the *windows command prompt* with `build` as the current directory.
> make package
```cmd
make package
```
Executing the above command will result in the creation of `Imagefusion Framework-0.0.1-win64.msi`.
Now you have a Windows installer, which you can execute to install the library and the utilities. This also works on other computers, since it includes all runtime dependencies.
......@@ -305,17 +379,22 @@ The cmake file of the imagefusion framework is also provided with test configura
This configuration will prepare and build all the test methods associated with the imagefusion framework to test all the functionalities.
The test functions can be build by simply executing the following command.
```cmd
make testimagefusion
```
This will build all the test functions into an exe file called `testimagefusion.exe`.
```cmd
testimagefusion
```
Simply execute the `testimagefusion.exe` file to run all the test functions.
Note that at the time of writing the MSYS2 version of GDAL, which is used to read images, does not support images in the HDF 4 format, but only HDF 5. This is bad, since MODIS images come in HDF 4, which is incompatible with HDF 5. There is a utility to convert HDF 4 images to HDF 5 images ([doc](https://support.hdfgroup.org/products/hdf5_tools/h4toh5/) / [download](https://support.hdfgroup.org/ftp/HDF5/releases/tools/h4toh5/)), which might help.
#### Troubleshooting
(Only applicable if the imagefusion framework library and the other utilities are built from source)
If the imagefusion framework does not compile or run properly, this might be due to issues with the dependent libraries.
......@@ -331,25 +410,32 @@ If it is not available then the execution of the utility will throw an error sta
The command below shows how to search for a particular package; installing it is shown afterwards.
```bash
pacman -Ss arpack
```
Executing the above command in MSYS2 MSYS will search for the arpack package and show the results, which will look like:
```output
mingw32/mingw-w64-i686-arpack 3.5.0-1
Fortran77 subroutines designed to solve large scale eigenvalue problems
(mingw-w64)
mingw64/mingw-w64-x86_64-arpack 3.5.0-1
Fortran77 subroutines designed to solve large scale eigenvalue problems
(mingw-w64)
```
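On a 64 bit MSYS2 setup you usually want the `mingw64/` results; they can be filtered out of the search output like this (a convenience sketch, not part of the official instructions):

```shell
# Keep only the 64-bit (mingw64) package names from a pacman search.
pacman -Ss arpack | awk -F'[/ ]' '/^mingw64\//{ print $2 }'
```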
Two versions of the arpack package are available. Depending on preference, install either version by executing one of the following commands.
```bash
# For 64 bit version
pacman -S mingw64/mingw-w64-x86_64-arpack
```
```bash
# For 32 bit version
pacman -S mingw32/mingw-w64-i686-arpack
```
The same procedure applies to any issue arising from a missing package.
......@@ -360,12 +446,14 @@ If you like to contribute to the project, we recommend you the following IDEs
* QT Creator: Just open the `CMakeLists.txt` as project file. Qt Creator can be installed on Windows with MSYS2 and on Linux from the repositories.
* Eclipse: Download the package "Eclipse IDE for C/C++ Developers" from eclipse.org and install or run it. Then generate eclipse project files:
```bash
cd ..
mkdir imagefusion-eclipse
cd imagefusion-eclipse
cmake ../imagefusion -G"Eclipse CDT4 - Unix Makefiles"
```
These steps work similarly with the CMake GUI on Windows. Now you have `.cproject` and `.project` files, which you can import into Eclipse.
## License
......
# currently unused, since CMake 3.25 is required to compile a project. We wait for Ubuntu 24.04...
cmake_minimum_required(VERSION 3.12)
project(spdlogtest VERSION 0.0.1)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
find_package(spdlog)
add_executable(spdlogtest main.cpp)
target_link_libraries(spdlogtest PUBLIC "spdlog::spdlog")
#include <iostream>
struct MyClass {
int a = 0;
double b = 2.5;
};
std::ostream& operator<<(std::ostream& out, MyClass const& m) {
out << "a: " << m.a << ", b: " << m.b;
return out;
}
#include <spdlog/spdlog.h>
#include <spdlog/fmt/ostr.h> // required for custom types
#if !defined(SPDLOG_FMT_EXTERNAL) // || !SPDLOG_FMT_EXTERNAL
#include <spdlog/fmt/bundled/ranges.h> // for logging std::vector
#else // SPDLOG_FMT_EXTERNAL is defined - use external fmtlib
#include <fmt/ranges.h> // for logging std::vector
// register custom types, that have an ostream operator << defined
#include <fmt/ostream.h>
#if FMT_VERSION >= 90000
template <> struct fmt::formatter<MyClass> : ostream_formatter {};
#endif
#endif
int main() {
MyClass b;
//std::cout << b << std::endl;
SPDLOG_INFO("{}", b);
return 0;
}
......@@ -115,8 +115,8 @@ public:
* @param validMask is an optional mask (with Type::uint8) in the size of the input images to
* mark valid and invalid input data (e. g. nodata fill values). It should have 255s at valid
* locations and 0s at invalid locations. Any data fusor should use this mask, but check
* whether it fits to the source images, by using `validMask.isMaskFor(src)`. If separate masks
* for the input images are meaningful and supported by a data fusor this mask parameter is the
* mask for the low resolution image at the prediction date. Otherwise it should be considered
* as a common mask for all input images.
*
......@@ -126,7 +126,7 @@ public:
* If a location is marked for prediction, but is at the same time marked as invalid it may
* still be predicted, depending on the algorithm. So `predMask` specifies where to write,
* while `validMask` specifies where to read. However, many algorithms read a value, modify it
* and write it. In that case only valid locations can be predicted.
*
* Note, all other necessary settings, such as setting the source images and algorithm specific
* settings should be set *before* calling predict!
......
......@@ -2262,7 +2262,7 @@ public:
/**
* @brief Get a single-channel mask from value range(s) of valid values
*
* @param channelRange is either a single range that will be used for all channels or one
* range per channel. A range is just an #Interval, like [1, 100].
*
* @param useAnd determines whether to merge multiple masks with a bitwise *and* (logical
......@@ -2354,10 +2354,10 @@ public:
* Image bad_mask = img.createSingleChannelMaskFromRange(Interval::closed(nodata, nodata));
* @endcode
*
* @see createMultiChannelMaskFromRange(std::vector<Interval> const& channelRange) const
* createSingleChannelMaskFromSet(std::vector<IntervalSet> const& channelSet) const
*/
Image createSingleChannelMaskFromRange(std::vector<Interval> const& channelRange, bool useAnd = true) const;
/// \copydoc createSingleChannelMaskFromRange
Image createSingleChannelMaskFromRange(Interval const& channelRange, bool useAnd = true) const;
......@@ -2366,7 +2366,7 @@ public:
/**
* @brief Get a single-channel mask from value set(s) of valid values
*
* @param channelSet is either a single set that will be used for all channels or one set per
* channel. A set (#IntervalSet) S is a union of Interval%s, like
* S = [1, 100] ∪ [200, 250] ∪ (300, 305).
*
......@@ -2467,9 +2467,9 @@ public:
* Image bad_mask = img.createSingleChannelMaskFromRange(Interval::closed(nodata, nodata));
* @endcode
*
* @see createMultiChannelMaskFromSet(std::vector<IntervalSet> const& channelSet) const
*/
Image createSingleChannelMaskFromSet(std::vector<IntervalSet> const& channelSet, bool useAnd = true) const;
/// \copydoc createSingleChannelMaskFromSet
Image createSingleChannelMaskFromSet(IntervalSet const& channelSet, bool useAnd = true) const;
......@@ -2477,7 +2477,7 @@ public:
/**
* @brief Get a multi-channel mask from value range(s)
*
* @param channelRange is either a single range that will be used for all channels or one
* range per channel. A range is just an #Interval, like \f$ [1, 100] \f$.
*
* This method can only create a mask from contiguous intervals (open or closed). If you
......@@ -2542,9 +2542,9 @@ public:
* Image bad_mask = img.createSingleChannelMaskFromRange(Interval::closed(nodata, nodata));
* @endcode
*
* @see createSingleChannelMaskFromRange(std::vector<Interval> const& channelRange) const
*/
Image createMultiChannelMaskFromRange(std::vector<Interval> const& channelRange) const;
/// \copydoc createMultiChannelMaskFromRange
Image createMultiChannelMaskFromRange(Interval const& channelRange) const;
......@@ -2552,7 +2552,7 @@ public:
/**
* @brief Get a multi-channel mask from value set(s)
*
* @param channelSet is either a single set that will be used for all channels or one set per
* channel. A set (#IntervalSet) S is a union of Interval%s, like S = [1, 100] ∪ [200, 250] ∪
* (300, 305).
*
......@@ -2622,9 +2622,9 @@ public:
* Image bad_mask = img.createSingleChannelMaskFromRange(Interval::closed(nodata, nodata));
* @endcode
*
* @see createSingleChannelMaskFromSet(std::vector<IntervalSet> const& channelSet) const
*/
Image createMultiChannelMaskFromSet(std::vector<IntervalSet> const& channelSet) const;
/// \copydoc createMultiChannelMaskFromSet
Image createMultiChannelMaskFromSet(IntervalSet const& channelSet) const;
......
......@@ -48,6 +48,7 @@
#include "type.h"
#include "optionparser.h"
#include "fileformat.h"
#if FMT_VERSION >= 90000
template <> struct fmt::formatter<imagefusion::Size> : ostream_formatter {};
template <> struct fmt::formatter<imagefusion::Dimensions> : ostream_formatter {};
template <> struct fmt::formatter<imagefusion::Point> : ostream_formatter {};
......@@ -64,6 +65,7 @@
template <> struct fmt::formatter<imagefusion::FileFormat> : ostream_formatter {};
#endif
#endif
#define SPDLOG_SINGLE_TRACE(...) _Pragma("omp single nowait")\
SPDLOG_TRACE(__VA_ARGS__)
......
......@@ -174,10 +174,13 @@ public:
* @brief Predict the image with help of the underlying DataFusor%s
* @param date to predict
*
* @param validMask is an optional mask in the size of the input images to mark (in)valid input
* data (e. g. fill values), see \ref DataFusor::predict. Your predict method should check for a
* valid mask, like done in parallelizer_test.cpp.
*
* @param predMask is an optional mask in the size of the input images to mark the locations
* that should be predicted. Depending on the algorithm, only valid locations might be
* predicted, see \ref DataFusor::predict.
*
* To predict the image at the specified date, the underlying DataFusor%s are called. But
* before this, the output image buffer is constructed in the size of the prediction area if
......
......@@ -109,7 +109,8 @@ public:
/**
* @brief Let the real DataFusor predict an image
* @param date to predict
* @param validMask is an optional mask to mark (in)valid image data.
* @param predMask is an optional mask to mark locations to predict.
* @see DataFusor::predict(int date, ConstImage const& mask = ConstImage{})
*/
void predict(int date, ConstImage const& validMask = {}, ConstImage const& predMask = {}) override;
......
......@@ -683,9 +683,17 @@ public:
* See also \ref staarch_image_mask_structure "STAARCH image structure with optional masks".
*
* @param baseMask should either be empty or a single-channel mask with the size of the source
* images. Locations with zero values mark invalid locations for all source images. Note that
* for STAARCH separate masks should be used for all images in the source image structure,
* see #srcImages(). Then this mask here does not need to mask away clouds, etc.
*
* @param predMask should either be empty or a single-channel mask with the size of the source
* images. It specifies the locations that should be predicted (255) and the locations that
* should not be predicted (0). Since STAARCH is based on STARFM, like in STARFM a prediction
* can only be done for valid locations, as usually specified by the masks in the #srcImages()
* or for simple cases by the `baseMask`. The result of the @ref outputImage() is undefined at
* locations where no prediction occurs.
*
* STAARCH is mainly an algorithm to detect the date when disturbances occur for each pixel.
* However, this date can be used to decide in a STARFM prediction which of the two surrounding
......@@ -695,8 +703,8 @@ public:
*
* When calling predict the first time for a new date interval (high res references), the date
* of disturbance image has to be made. For that generateDOD() is called. However, you can also
* call it manually and output the DoD images if you are interested in that, see #dodImage. Then
* you don't have to use predict at all.
*
* Before you can call predict, you have to set the source images with srcImages() and set the
* options with processOptions().
......@@ -707,11 +715,11 @@ public:
* @brief Generate the date of disturbance image
*
* @param baseMask should either be empty or an arbitrary mask in the size of the source
* images. It must be single-channel. Locations with zero values mark invalid locations for all
* source images. The result at these locations is undefined. Note that, because for STAARCH
* separate masks should be provided for all images in the source image structure, see
* #srcImages(), the mask here can be a user region of interest and does not need to mask away
* clouds, etc., like with other algorithms.
*
* This uses the high resolution images with their respective masks and finds the change mask.
* The change mask marks the locations that had a disturbance in the right high resolution
......@@ -757,7 +765,8 @@ protected:
/**
* @brief Check the input images size, number of channels, etc.
* @param validMask will also be checked
* @param predMask will also be checked
*
* This method is called from predict().
*/
......