Unverified Commit 807ce87e authored by Nitin Srinivasan, committed by GitHub

change build environment to be manylinux2014 compatible (#57)

Set up symlink for devtoolset-8

Combine Docker GCR presubmits and also push main to gcr

Commit missed files

Log in to GCR

Fix conditional, hopefully

Clarify

Add Python 3.10 support (#58)

Adds Python 3.10 support to the containers. Python 3.10 changes some library behavior and, for now, needs an alternative installation method to work.

Upgrade grpcio for faster builds and clean up setup

Add utilities for running release tests (#56)

This adds the dependencies and notably bazelrc config options to run TensorFlow's Nightly and Release tests, which I've been working on replicating on internal CI. I still have documentation and migration work to do, but the major portion of the support work is here.

add gdb to the system packages

change to gcc 8.3.1 from centos7 for devtoolset8

fix libstdc++ symlink in devtoolset-8 environment

Undo ignoring other xml files

Update README

Deduplicate repeated messages

Squash long runfiles paths

Lock nvidia driver to 460

libtensorflow work

Fix libtensorflow script and start prelim check

Update Test Requirements to have the same versions as tf_sig_build_dockerfiles/devel.requirements.txt (#65)

* Add additional gitignore files

* Update requirements with same versions

Keep versions consistent with tf_sig_build_dockerfiles/devel.requirements.txt

Cleanup

Fix Build issue from `python_include` (#67)

* Remove Python 3.10 pip special handling

* Link usr/include to usr/local/include

* Update location of python include

* Update setup.python.sh

Assorted changes -- see details

- Remove installation of nvidia-profiler, which depends on libcuda1,
which ultimately installs an nvidia driver package, which we don't want
because we're running in docker, in which the drivers are mounted. I
hope nvidia-profiler isn't necessary for anything important; otherwise
we'll need to synchronize driver versions between the containers and VM
images.
- Add less, colordiff and a newer version of clang-format
- Add code_check_changed_files, which is intended to replace the
"incremental" parts of ci_sanity. Still a work in progress because we
need to decide on valuable configurations (clang-format and pylint
cannot be run the same way as we have them configured internally and
currently have a lot of findings)
- Add code_check_full, which is intended to replace the "across entire
code base" parts of ci_sanity. I rewrote many of the clunkier tests.
Still a work in progress because we must verify that the changed tests
will still fail.
- Fix bad "bazel test " expansion for libtensorflow
- Fix bad chmod for libtensorflow repacker

Change libtensorflow config values to fix target selection

Fix a typo in venv installation

(Thanks to reedwm)

Remove extra lines

(Thanks again to reedwm)

Clarify ctrl-s warning

Correctly remove extra test filters

Make it possible to run isolated pip tests

More work on code checks

Fix a typo

Clean up code check full

Remove clang-format

Cleanup changed_files and move one to full

Add a missing test

Clean up and fix code_check_full

Update docs and create experimental RBE configs

Update docs and create experimental RBE configs

Update dependencies to 2.9.0.dev

Update Go API installation guide for TensorFlow 2.8.0 (#74)

Clarify usage of nightly commit

Fix mistaken 'test' command

change to devtoolset-9 and gcc 9.3.1 for manylinux2014

change cachebuster value for ml2014 remote cache

change to new libstdcxx abi for devtoolset-9

change cachebuster value to use the new libstdcxx abi

link against nonshared44 in devtoolset-9

update the cachebuster value

change CACHEBUSTER value for gpu builds

remove redundant commands during build environment setup

change cachebuster variable name for gpu builds

store manylinux2014 cache in a different location

amend comment for accuracy
parent d82c4bb4
@@ -7,18 +7,23 @@ COPY setup.packages.sh setup.packages.sh
COPY builder.packages.txt builder.packages.txt
RUN /setup.packages.sh /builder.packages.txt

# Install devtoolset-7 in /dt7 with glibc 2.12 and libstdc++ 4.4, for building
# manylinux2010-compatible packages. Scripts expect to be in the root directory.
COPY builder.devtoolset/fixlinks.sh /fixlinks.sh
COPY builder.devtoolset/rpm-patch.sh /rpm-patch.sh
COPY builder.devtoolset/build_devtoolset.sh /build_devtoolset.sh
RUN /build_devtoolset.sh devtoolset-7 /dt7
# Install devtoolset-9 in /dt9 with glibc 2.17 and libstdc++ 4.8, for building
# manylinux2014-compatible packages.
RUN /build_devtoolset.sh devtoolset-9 /dt9

################################################################################
FROM nvidia/cuda:11.2.2-base-ubuntu20.04 as devel
################################################################################
COPY --from=builder /dt7 /dt7
COPY --from=builder /dt9 /dt9

# Install required development packages but delete unneeded CUDA bloat
# CUDA must be cleaned up in the same command to prevent Docker layer bloating
...
@@ -15,7 +15,7 @@
# ==============================================================================
#
# Builds a devtoolset cross-compiler targeting manylinux 2010 (glibc 2.12 /
# libstdc++ 4.4) or manylinux2014 (glibc 2.17 / libstdc++ 4.8).

VERSION="$1"
TARGET="$2"
@@ -23,26 +23,52 @@ TARGET="$2"
case "${VERSION}" in
devtoolset-7)
  LIBSTDCXX_VERSION="6.0.24"
  LIBSTDCXX_ABI="gcc4-compatible"
  ;;
devtoolset-9)
  LIBSTDCXX_VERSION="6.0.28"
  LIBSTDCXX_ABI="new"
  ;;
*)
  echo "Usage: $0 {devtoolset-7|devtoolset-9} <target-directory>"
  echo "Use 'devtoolset-7' to build a manylinux2010 compatible toolchain or 'devtoolset-9' to build a manylinux2014 compatible toolchain"
  exit 1
  ;;
esac
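The version dispatch above maps a devtoolset name to the manylinux standard its toolchain targets and rejects anything else. A standalone sketch of the same pattern (the function name here is illustrative, not part of the script):

```shell
#!/bin/sh
# Illustrative copy of the script's argument dispatch: each recognized
# devtoolset name selects one manylinux target; anything else is an error.
toolchain_target() {
  case "$1" in
    devtoolset-7) echo "manylinux2010" ;;
    devtoolset-9) echo "manylinux2014" ;;
    *) echo "unsupported devtoolset: $1" >&2; return 1 ;;
  esac
}

toolchain_target devtoolset-9   # prints manylinux2014
```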
mkdir -p "${TARGET}"

# Download glibc's shared and development libraries based on the value of the
# `VERSION` parameter.
# Note: 'Templatizing' this and the other conditional branches would require
# defining several variables (version, os, path) making it difficult to maintain
# and extend for future modifications.
case "${VERSION}" in
devtoolset-7)
  # Download binary glibc 2.12 shared library release.
  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6_2.12.1-0ubuntu6_amd64.deb" && \
  unar "libc6_2.12.1-0ubuntu6_amd64.deb" && \
  tar -C "${TARGET}" -xvzf "libc6_2.12.1-0ubuntu6_amd64/data.tar.gz" && \
  rm -rf "libc6_2.12.1-0ubuntu6_amd64.deb" "libc6_2.12.1-0ubuntu6_amd64"
  # Download binary glibc 2.12 development library release.
  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6-dev_2.12.1-0ubuntu6_amd64.deb" && \
  unar "libc6-dev_2.12.1-0ubuntu6_amd64.deb" && \
  tar -C "${TARGET}" -xvzf "libc6-dev_2.12.1-0ubuntu6_amd64/data.tar.gz" && \
  rm -rf "libc6-dev_2.12.1-0ubuntu6_amd64.deb" "libc6-dev_2.12.1-0ubuntu6_amd64"
  ;;
devtoolset-9)
  # Download binary glibc 2.17 shared library release.
  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6_2.17-0ubuntu5.1_amd64.deb" && \
  unar "libc6_2.17-0ubuntu5.1_amd64.deb" && \
  tar -C "${TARGET}" -xvzf "libc6_2.17-0ubuntu5.1_amd64/data.tar.gz" && \
  rm -rf "libc6_2.17-0ubuntu5.1_amd64.deb" "libc6_2.17-0ubuntu5.1_amd64"
  # Download binary glibc 2.17 development library release.
  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6-dev_2.17-0ubuntu5.1_amd64.deb" && \
  unar "libc6-dev_2.17-0ubuntu5.1_amd64.deb" && \
  tar -C "${TARGET}" -xvzf "libc6-dev_2.17-0ubuntu5.1_amd64/data.tar.gz" && \
  rm -rf "libc6-dev_2.17-0ubuntu5.1_amd64.deb" "libc6-dev_2.17-0ubuntu5.1_amd64"
  ;;
esac

# Put the current kernel headers from ubuntu in place.
ln -s "/usr/include/linux" "/${TARGET}/usr/include/linux"
@@ -56,6 +82,10 @@ ln -s "/usr/include/x86_64-linux-gnu/asm" "/${TARGET}/usr/include/asm"
# Patch to allow non-glibc 2.12 compatible builds to work.
sed -i '54i#define TCP_USER_TIMEOUT 18' "/${TARGET}/usr/include/netinet/tcp.h"
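The `sed -i '54i…'` invocation above inserts a new line before line 54 of the sysroot's `tcp.h`, backporting the `TCP_USER_TIMEOUT` constant that glibc 2.12 lacks. A scratch-file demonstration of the same GNU sed insert-at-line pattern (the file name and contents below are made up):

```shell
#!/bin/sh
# Demonstrate GNU sed's 'Ni<text>' command: insert <text> before line N.
printf 'line1\nline2\nline3\n' > /tmp/sed_insert_demo.txt
sed -i '2i#define TCP_USER_TIMEOUT 18' /tmp/sed_insert_demo.txt
cat /tmp/sed_insert_demo.txt
```

Note that the one-line `2i<text>` form is a GNU extension; BSD sed requires the inserted text on its own line.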
# Download specific version of libstdc++ shared library based on the value of
# the `VERSION` parameter.
case "${VERSION}" in
devtoolset-7)
  # Download binary libstdc++ 4.4 release we are going to link against.
  # We only need the shared library, as we're going to develop against the
  # libstdc++ provided by devtoolset.
@@ -63,22 +93,30 @@ wget "http://old-releases.ubuntu.com/ubuntu/pool/main/g/gcc-4.4/libstdc++6_4.4.3
  unar "libstdc++6_4.4.3-4ubuntu5_amd64.deb" && \
  tar -C "/${TARGET}" -xvzf "libstdc++6_4.4.3-4ubuntu5_amd64/data.tar.gz" "./usr/lib/libstdc++.so.6.0.13" && \
  rm -rf "libstdc++6_4.4.3-4ubuntu5_amd64.deb" "libstdc++6_4.4.3-4ubuntu5_amd64"
  ;;
devtoolset-9)
  # Download binary libstdc++ 4.8 shared library release.
  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/g/gcc-4.8/libstdc++6_4.8.1-10ubuntu8_amd64.deb" && \
  unar "libstdc++6_4.8.1-10ubuntu8_amd64.deb" && \
  tar -C "/${TARGET}" -xvzf "libstdc++6_4.8.1-10ubuntu8_amd64/data.tar.gz" "./usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.18" && \
  rm -rf "libstdc++6_4.8.1-10ubuntu8_amd64.deb" "libstdc++6_4.8.1-10ubuntu8_amd64"
  ;;
esac

mkdir -p "${TARGET}-src"
cd "${TARGET}-src"

# Build a devtoolset cross-compiler based on our glibc 2.12/glibc 2.17 sysroot setup.
case "${VERSION}" in
devtoolset-7)
  wget "http://vault.centos.org/centos/6/sclo/Source/rh/devtoolset-7/devtoolset-7-gcc-7.3.1-5.15.el6.src.rpm"
  rpm2cpio "devtoolset-7-gcc-7.3.1-5.15.el6.src.rpm" | cpio -idmv
  tar -xvjf "gcc-7.3.1-20180303.tar.bz2" --strip 1
  ;;
devtoolset-9)
  wget "https://vault.centos.org/centos/7/sclo/Source/rh/devtoolset-9-gcc-9.3.1-2.2.el7.src.rpm"
  rpm2cpio "devtoolset-9-gcc-9.3.1-2.2.el7.src.rpm" | cpio -idmv
  tar -xvf "gcc-9.3.1-20200408.tar.xz" --strip 1
  ;;
esac
@@ -109,7 +147,7 @@ cd "${TARGET}-build"
  --enable-plugin \
  --enable-shared \
  --enable-threads=posix \
  --with-default-libstdcxx-abi=${LIBSTDCXX_ABI} \
  --with-gcc-major-version-only \
  --with-linker-hash-style="gnu" \
  --with-tune="generic" \
@@ -117,14 +155,29 @@ cd "${TARGET}-build"
make -j 42 && \
make install

# Create the devtoolset libstdc++ linkerscript that links dynamically against
# the system libstdc++ 4.4 and provides all other symbols statically.
case "${VERSION}" in
devtoolset-7)
  mv "/${TARGET}/usr/lib/libstdc++.so.${LIBSTDCXX_VERSION}" \
    "/${TARGET}/usr/lib/libstdc++.so.${LIBSTDCXX_VERSION}.backup"
  echo -e "OUTPUT_FORMAT(elf64-x86-64)\nINPUT ( libstdc++.so.6.0.13 -lstdc++_nonshared44 )" \
    > "/${TARGET}/usr/lib/libstdc++.so.${LIBSTDCXX_VERSION}"
  cp "./x86_64-pc-linux-gnu/libstdc++-v3/src/.libs/libstdc++_nonshared44.a" \
    "/${TARGET}/usr/lib"
  ;;
devtoolset-9)
  # Note that the installation path for libstdc++ here is /${TARGET}/usr/lib64/
  mv "/${TARGET}/usr/lib64/libstdc++.so.${LIBSTDCXX_VERSION}" \
    "/${TARGET}/usr/lib64/libstdc++.so.${LIBSTDCXX_VERSION}.backup"
  echo -e "OUTPUT_FORMAT(elf64-x86-64)\nINPUT ( libstdc++.so.6.0.18 -lstdc++_nonshared44 )" \
    > "/${TARGET}/usr/lib64/libstdc++.so.${LIBSTDCXX_VERSION}"
  cp "./x86_64-pc-linux-gnu/libstdc++-v3/src/.libs/libstdc++_nonshared44.a" \
    "/${TARGET}/usr/lib64"
  ;;
esac
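The trick here is that the file named like the real shared library is actually a GNU ld linker script: anything linking `-lstdc++` resolves old symbols dynamically from the pinned system `libstdc++.so.6.0.18` and every newer symbol statically from the `nonshared` archive, keeping the produced binaries compatible with old distributions. In the devtoolset-9 branch the generated script's contents are simply:

```
OUTPUT_FORMAT(elf64-x86-64)
INPUT ( libstdc++.so.6.0.18 -lstdc++_nonshared44 )
```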
# Link in architecture specific includes from the system; note that we cannot
# link in the whole x86_64-linux-gnu folder, as otherwise we're overlaying
@@ -136,4 +189,3 @@ PYTHON_VERSIONS=("python3.7m" "python3.8" "python3.9" "python3.10")
for v in "${PYTHON_VERSIONS[@]}"; do
  ln -s "/usr/local/include/${v}" "/${TARGET}/usr/include/x86_64-linux-gnu/${v}"
done
@@ -30,6 +30,7 @@ clang-format-12
colordiff
curl
ffmpeg
gdb
git
jq
less
...
@@ -6,11 +6,11 @@ build:sigbuild_local_cache --disk_cache=/tf/cache
# Use the public-access TF DevInfra cache (read only)
build:sigbuild_remote_cache --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache" --remote_upload_local_results=false
# Write to the TF DevInfra cache (only works for internal TF CI)
build:sigbuild_remote_cache_push --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache/manylinux2014" --google_default_credentials
# Change the value of CACHEBUSTER when upgrading the toolchain, or when testing
# different compilation methods. E.g. for a PR to test a new CUDA version, set
# the CACHEBUSTER to the PR number.
build --action_env=CACHEBUSTER=r2.9_pr57
# Use Python 3.X as installed in container image
build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
@@ -42,7 +42,7 @@ build --build_event_text_file=/tf/pkg/bep.txt
build --build_event_binary_file=/tf/pkg/bep.pb
# Use the NVCC toolchain to compile for manylinux2010
build --crosstool_top=@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain
# Test-related settings below this point.
test --build_tests_only --keep_going --test_output=errors --verbose_failures=true
...
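For orientation, a contributor inside the container would combine these named configs on the Bazel command line; the invocation below is illustrative (the target is TensorFlow's usual pip-package rule, not something defined in this file):

```
# Hypothetical usage: build with the read-only public cache config above.
bazel build --config=sigbuild_remote_cache //tensorflow/tools/pip_package:build_pip_package
```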
@@ -6,11 +6,11 @@ build:sigbuild_local_cache --disk_cache=/tf/cache
# Use the public-access TF DevInfra cache (read only)
build:sigbuild_remote_cache --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache" --remote_upload_local_results=false
# Write to the TF DevInfra cache (only works for internal TF CI)
build:sigbuild_remote_cache_push --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache/manylinux2014" --google_default_credentials
# Change the value of CACHEBUSTER when upgrading the toolchain, or when testing
# different compilation methods. E.g. for a PR to test a new CUDA version, set
# the CACHEBUSTER to the PR number.
build --action_env=CACHEBUSTER=r2.9_pr57
# Use Python 3.X as installed in container image
build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
@@ -47,9 +47,9 @@ build --repo_env TF_NEED_CUDA=1
build --action_env=TF_CUDA_VERSION="11"
build --action_env=TF_CUDNN_VERSION="8"
build --action_env=CUDA_TOOLKIT_PATH="/usr/local/cuda-11.2"
build --action_env=GCC_HOST_COMPILER_PATH="/dt9/usr/bin/gcc"
build --action_env=LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/tensorrt/lib"
build --crosstool_top=@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain
# CUDA: Enable TensorRT optimizations
# https://developer.nvidia.com/tensorrt
...
@@ -6,7 +6,7 @@ set -euxo pipefail
for wheel in /tf/pkg/*.whl; do
  echo "Checking and renaming $wheel..."
  time python3 -m auditwheel repair --plat manylinux2014_x86_64 "$wheel" --wheel-dir /tf/pkg 2>&1 | tee check.txt
  # We don't need the original wheel if it was renamed
  new_wheel=$(grep --extended-regexp --only-matching '/tf/pkg/\S+.whl' check.txt)
...
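The `grep --only-matching` step above recovers the repaired wheel's path from auditwheel's captured log. A standalone demonstration of the extraction, with a fabricated log line and wheel name (GNU grep's `\S` shorthand is assumed):

```shell
#!/bin/sh
# Extract a '/tf/pkg/<something>.whl' path from a captured tool log.
echo 'Fixed-up wheel written to /tf/pkg/demo-2.9.0-cp39-manylinux2014_x86_64.whl' > check.txt
grep --extended-regexp --only-matching '/tf/pkg/\S+.whl' check.txt
# prints /tf/pkg/demo-2.9.0-cp39-manylinux2014_x86_64.whl
```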
@@ -12,9 +12,9 @@ teardown_file() {
  rm -rf /tf/venv
}

@test "Wheel is manylinux2014 (manylinux_2_17) compliant" {
  python3 -m auditwheel show "$TF_WHEEL" > audit.txt
  grep --quiet 'This constrains the platform tag to "manylinux_2_17_x86_64"' audit.txt
}

@test "Wheel conforms to upstream size limitations" {
...
@@ -21,8 +21,12 @@ EOF
# for any Python version present
pushd /usr/include/x86_64-linux-gnu
for f in $(ls | grep python); do
  # set up symlink for devtoolset-7
  rm -f /dt7/usr/include/x86_64-linux-gnu/$f
  ln -s /usr/include/x86_64-linux-gnu/$f /dt7/usr/include/x86_64-linux-gnu/$f
  # set up symlink for devtoolset-9
  rm -f /dt9/usr/include/x86_64-linux-gnu/$f
  ln -s /usr/include/x86_64-linux-gnu/$f /dt9/usr/include/x86_64-linux-gnu/$f
done
popd
...
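The `rm -f` before each `ln -s` makes the relink idempotent: re-running the setup never fails on a link that already exists. A scratch-directory sketch of the same force-relink pattern (all paths below are invented):

```shell
#!/bin/sh
# Force-relink pattern: remove any stale link, then point it at the real file.
mkdir -p /tmp/relink_demo/include
touch /tmp/relink_demo/python3.9.h
# Safe to run repeatedly; 'rm -f' ignores a missing target,
# and 'ln -s' then always finds a free name.
rm -f /tmp/relink_demo/include/python3.9.h
ln -s /tmp/relink_demo/python3.9.h /tmp/relink_demo/include/python3.9.h
readlink /tmp/relink_demo/include/python3.9.h
# prints /tmp/relink_demo/python3.9.h
```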