Commit 6d86a84b authored by Austin Anderson, committed by GitHub

Add directory for bazelrcs (#31)

This is another experimental directory, containing bazelrcs that will
replicate TensorFlow's CI setup. Right now, these two configs are very
basic, and I'm testing them out.
parent 86ca4607
@@ -57,9 +57,12 @@ Want to add your own project to this list? It's easy: check out
* [**WSL2 GPU Guide**](wsl2_gpu_guide): Instructions for enabling GPU with TensorFlow
on a WSL2 virtual machine.
### WIP / Other
* [**(Experimental) Official Dockerfiles**](experimental_official_dockerfiles):
Rework of TensorFlow's Dockerfiles
* [**(Experimental) Official Bazelrcs**](experimental_official_bazelrcs):
Standard configurations for TensorFlow builds
* [**Directory Template**](directory_template): Example short description.
* [**Tekton CI**](tekton): perfinion's experimental directory for using Tekton
CI with TensorFlow
# [Experimental] Official BazelRCs
Standard configurations for TensorFlow builds
Maintainer: @angerson (TensorFlow, SIG Build)
* * *
WIP bazelrcs for building and using TensorFlow and related packages.
This directory is intended to hold concrete `bazelrc` files used by
TensorFlow's CI, which users can also apply to replicate TensorFlow's CI builds.
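As a usage sketch (the path and filename below are hypothetical examples, not files this directory is confirmed to provide), a project could pull one of these configs into its own `.bazelrc` with Bazel's `try-import`, which silently skips the file if it is missing:

```
# In a project's .bazelrc: reuse a shared CI config if present.
# The path below is a hypothetical example.
try-import %workspace%/experimental_official_bazelrcs/cpu.bazelrc
```

Alternatively, a single invocation can point at a config directly, e.g. `bazel --bazelrc=<file> build <target>`.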
# This bazelrc can build a CPU-supporting TF package.
# Hopefully it's compatible with manylinux2010.
# Use Python 3.X as installed in container image
build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
build --action_env PYTHON_LIB_PATH="/usr/lib/tf_python"
build --python_path="/usr/bin/python3"
# Build TensorFlow v2
build --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
# Prevent double-compilation of some TF code, ref. b/183279666 (internal)
# > TF's gen_api_init_files has a genrule to run the core TensorFlow code
# > on the host machine. If we don't have --distinct_host_configuration=false,
# > the core TensorFlow code will be built once for the host and once for the
# > target platform.
# See also https://docs.bazel.build/versions/master/guide.html#build-configurations-and-cross-compilation
build --distinct_host_configuration=false
# Target the AVX instruction set
build --copt=-mavx --host_copt=-mavx
# Use the NVCC toolchain to compile for manylinux2010
build --crosstool_top=@org_tensorflow//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda11.2:toolchain
# This bazelrc can build a GPU-supporting TF package.
# Hopefully it's compatible with manylinux2010.
# Use Python 3.X as installed in container image
build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
build --action_env PYTHON_LIB_PATH="/usr/lib/tf_python"
build --python_path="/usr/bin/python3"
# Build TensorFlow v2
build --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
# Prevent double-compilation of some TF code, ref. b/183279666 (internal)
# > TF's gen_api_init_files has a genrule to run the core TensorFlow code
# > on the host machine. If we don't have --distinct_host_configuration=false,
# > the core TensorFlow code will be built once for the host and once for the
# > target platform.
# See also https://docs.bazel.build/versions/master/guide.html#build-configurations-and-cross-compilation
build --distinct_host_configuration=false
# Target the AVX instruction set
build --copt=-mavx --host_copt=-mavx
# CUDA: Set up compilation CUDA version and paths
build --@local_config_cuda//:enable_cuda
build --repo_env TF_NEED_CUDA=1
build --action_env=TF_CUDA_VERSION="11"
build --action_env=TF_CUDNN_VERSION="8"
build --action_env=CUDA_TOOLKIT_PATH="/usr/local/cuda-11.2"
build --action_env=GCC_HOST_COMPILER_PATH="/usr/bin/gcc-5"
build --action_env=LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/tensorrt/lib"
build --crosstool_top=@org_tensorflow//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda11.2:toolchain
# CUDA: Enable TensorRT optimizations
# https://developer.nvidia.com/tensorrt
build --repo_env TF_NEED_TENSORRT=1
# CUDA: Select supported compute capabilities (supported graphics cards).
# See https://developer.nvidia.com/cuda-gpus#compute
# Note: sm_XX embeds compiled SASS for that exact architecture only, while
# compute_XX embeds PTX that newer GPUs can JIT-compile, giving forward
# compatibility.
# TODO: How can users select a good value for this?
build --repo_env=TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_50,sm_60,sm_70,sm_75,compute_80"
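The capability list above mixes `sm_` entries (architecture-specific binary code) with a final `compute_` entry (PTX) so that GPUs newer than the last listed architecture can still JIT-compile the kernels. A minimal sketch of that naming convention (this helper is hypothetical, not part of TensorFlow or these configs):

```python
def format_cuda_capabilities(caps):
    """Format a TF_CUDA_COMPUTE_CAPABILITIES-style value: binary code
    (sm_) for every listed architecture except the newest, plus PTX
    (compute_) for the newest so future GPUs can JIT-compile it.
    `caps` is a list of "major.minor" strings, e.g. ["3.5", "8.0"]."""
    ordered = sorted(caps, key=lambda c: tuple(map(int, c.split("."))))
    tokens = ["sm_" + c.replace(".", "") for c in ordered[:-1]]
    tokens.append("compute_" + ordered[-1].replace(".", ""))
    return ",".join(tokens)

print(format_cuda_capabilities(["3.5", "5.0", "6.0", "7.0", "7.5", "8.0"]))
# -> sm_35,sm_50,sm_60,sm_70,sm_75,compute_80
```

With the capabilities from the config above, this reproduces the exact value passed to `--repo_env=TF_CUDA_COMPUTE_CAPABILITIES`.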
# This bazelrc can build a GPU-supporting TF package compatible with
# manylinux2010.
#
# It includes some test query settings ripped from TensorFlow's testing setup,
# but I'm not exactly sure which CI builds they match. Note that there is a
# no_oss_py38 tag, which implies there are other tags for disabling tests under
# specific Python versions. Since tag_filters flags aren't cumulative,
# sharing bazelrc files could become difficult.
# Python settings
build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
build --action_env PYTHON_LIB_PATH="/usr/lib/tf_python"
build --python_path="/usr/bin/python3"
# Build Env
build --action_env ABI_VERSION="gcc"
build --action_env ABI_LIBC_VERSION="glibc_2.19"
build --action_env BAZEL_COMPILER="/dt7/usr/bin/gcc"
build --action_env BAZEL_HOST_SYSTEM="i686-unknown-linux-gnu"
build --action_env BAZEL_TARGET_LIBC="glibc_2.19"
build --action_env BAZEL_TARGET_CPU="k8"
build --action_env BAZEL_TARGET_SYSTEM="x86_64-unknown-linux-gnu"
build --action_env CC_TOOLCHAIN_NAME="linux_gnu_x86"
build --action_env CC="/dt7/usr/bin/gcc"
build --action_env CLEAR_CACHE=1
build --action_env HOST_CXX_COMPILER="/dt7/usr/bin/gcc"
build --action_env HOST_C_COMPILER="/dt7/usr/bin/gcc"
# Configs applied for this build
build --config=release_gpu_linux
build --config=xla
build --config=tensorrt
build --config=cuda
# Standard
build:opt --copt=-mavx
build:opt --host_copt=-mavx
build:opt --copt=-march=native
build:opt --host_copt=-march=native
build:opt --define with_default_optimizations=true
# Testing
test --flaky_test_attempts=3
test --test_size_filters=small,medium
test --test_env=LD_LIBRARY_PATH
test:v1 --test_tag_filters=-benchmark-test,-no_oss,-no_gpu,-oss_serial
test:v1 --build_tag_filters=-benchmark-test,-no_oss,-no_gpu
test:v2 --test_tag_filters=-benchmark-test,-no_oss,-no_gpu,-oss_serial,-v1only
test:v2 --build_tag_filters=-benchmark-test,-no_oss,-no_gpu,-v1only
build --action_env TF_CONFIGURE_IOS="0"
build --crosstool_top=//third_party/toolchains/preconfig/ubuntu16.04/gcc7_manylinux2010-nvcc-cuda11.2:toolchain
build --linkopt=-lrt
test --action_env=TF2_BEHAVIOR=1
test --test_lang_filters=py
test --build_tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py38,-no_cuda11"
test --test_tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_serial,-no_oss_py38,-no_cuda11"
test --test_timeout="300,450,1200,3600" --local_test_jobs=4
test --test_output=errors --verbose_failures=true --keep_going
test --run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute
# This is how a similar bazelrc was invoked, but I can't remember
# where from.
#
# test -- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
# test //tensorflow/... \
# -//tensorflow/python/integration_testing/... \
# -//tensorflow/compiler/tf2tensorrt/... \
# -//tensorflow/compiler/xrt/... \
# -//tensorflow/lite/micro/examples/... \
# -//tensorflow/core/tpu/...