Fix compilation with GCC and Clang #587
Conversation
LGTM in general. I guess we need validation to check that the changes still work fine with older compiler toolchains. cc @xuhancn to review and stamp on it.
```cmake
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-ignored-attributes")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-pessimizing-move")

if (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-vla-cxx-extension")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-deprecated-builtins")
```
can you comment on the build errors you found without these?
Clang-only, for -Wno-vla-cxx-extension, tons of warnings in all places:
/src/intel-extension-for-pytorch/csrc/cpu/aten/kernels/MergedEmbeddingBagKrnl.cpp:880:33: warning: variable length arrays in C++ are a Clang extension [-Wvla-cxx-extension]
880 | scalar_t* val_ptr[world_size];
| ^~~~~~~~~~
/usr/include/ATen/Dispatch.h:797:46: note: expanded from macro 'AT_DISPATCH_INDEX_TYPES'
797 | at::ScalarType::Long, index_t, __VA_ARGS__))
| ^~~~~~~~~~~
/usr/include/ATen/Dispatch.h:70:12: note: expanded from macro 'AT_PRIVATE_CASE_TYPE_USING_HINT'
70 | return __VA_ARGS__(); \
| ^~~~~~~~~~~
/usr/include/ATen/Dispatch.h:221:7: note: expanded from macro 'AT_DISPATCH_SWITCH'
221 | __VA_ARGS__ \
| ^~~~~~~~~~~
note: (skipping 2 expansions in backtrace; use -fmacro-backtrace-limit=0 to see all)
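For what it's worth, the VLA could also be rewritten rather than suppressed. A minimal sketch of that alternative, assuming world_size is only known at runtime (names are placeholders, not the actual MergedEmbeddingBagKrnl.cpp code):

```cpp
#include <vector>

// Hedged sketch only: replacing the C99-style VLA with a std::vector keeps
// the same runtime sizing without relying on the Clang VLA extension.
template <typename scalar_t>
void collect_pointers(int world_size) {
  // was: scalar_t* val_ptr[world_size];          // -Wvla-cxx-extension
  std::vector<scalar_t*> val_ptr(world_size, nullptr);
  // val_ptr.data() can be passed wherever a scalar_t** was expected.
}
```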
Clang-only, for -Wno-deprecated-builtins, a few warnings, all related to ROBIN_HOOD_IS_TRIVIALLY_COPYABLE:
/src/intel-extension-for-pytorch/csrc/cpu/utils/robin_hood.h:1545:29: warning: builtin __has_trivial_copy is deprecated; use __is_trivially_copyable instead [-Wdeprecated-builtins]
1545 | Cloner<Table, IsFlat && ROBIN_HOOD_IS_TRIVIALLY_COPYABLE(Node)>()(o, *this);
| ^
/src/intel-extension-for-pytorch/csrc/cpu/utils/robin_hood.h:218:47: note: expanded from macro 'ROBIN_HOOD_IS_TRIVIALLY_COPYABLE'
218 | #define ROBIN_HOOD_IS_TRIVIALLY_COPYABLE(...) __has_trivial_copy(__VA_ARGS__)
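Since the warning points at the macro itself, one possible (untested) alternative to silencing the diagnostic would be to switch the macro to the standard trait, roughly:

```cpp
#include <type_traits>

// Hedged sketch, not the PR's change: use the standard trait instead of the
// deprecated __has_trivial_copy builtin that Clang complains about.
#define ROBIN_HOOD_IS_TRIVIALLY_COPYABLE(...) \
  std::is_trivially_copyable<__VA_ARGS__>::value
```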
Clang-only, for -Wno-pessimizing-move, repeated many times for inner_product/deconv/matmul/tensor.hpp/etc.:
/src/intel-extension-for-pytorch/third_party/ideep/include/ideep/operators/matmul.hpp:1028:23: warning: moving a temporary object prevents copy elision [-Wpessimizing-move]
1028 | param.primitive = std::move(super(param.pd));
| ^
/src/intel-extension-for-pytorch/third_party/ideep/include/ideep/operators/matmul.hpp:1028:23: note: remove std::move call here
1028 | param.primitive = std::move(super(param.pd));
| ^~~~~~~~~~ ~
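The fix for this one is mechanical: the right-hand side is already a temporary, so the std::move only gets in the way. A small standalone illustration (not the ideep code):

```cpp
#include <utility>

struct primitive_t {
  explicit primitive_t(int) {}
};

// Returning std::move(temporary) triggers -Wpessimizing-move because it
// blocks copy elision; returning the temporary directly lets it be elided.
primitive_t make_pessimized() { return std::move(primitive_t(42)); }
primitive_t make_elided()     { return primitive_t(42); }
```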
GCC-only, for -Wno-ignored-attributes, tons of warnings in all places:
/src/intel-extension-for-pytorch/csrc/cpu/tpp/woq/vec.h:43:21: warning: ignoring attributes on template argument ‘__m512’ [-Wignored-attributes]
43 | struct VecOps<__m512> {
| ^
/src/intel-extension-for-pytorch/csrc/cpu/tpp/woq/vec.h:228:33: warning: ignoring attributes on template argument ‘__m512’ [-Wignored-attributes]
228 | inline std::tuple<__m512, __m512> _vec_load_bfloat16_as_two_floats(
| ^
/src/intel-extension-for-pytorch/csrc/cpu/tpp/woq/vec.h:228:33: warning: ignoring attributes on template argument ‘__m512’ [-Wignored-attributes]
/src/intel-extension-for-pytorch/csrc/cpu/tpp/woq/vec.h:284:34: warning: ignoring attributes on template argument ‘VecArray<N, c10::BFloat16>::vec_type’ {aka ‘__m512’} [-Wignored-attributes]
284 | using vec_ops = VecOps<vec_type>;
| ^
/src/intel-extension-for-pytorch/csrc/cpu/tpp/woq/vec.h:286:74: warning: ignoring attributes on template argument ‘VecType<float>::type’ {aka ‘__m512’} [-Wignored-attributes]
286 | using type = typename std::array<typename VecType<float>::type, num_vec>;
|
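For context on where this comes from, a minimal standalone repro (assuming AVX-512 compile flags, e.g. -mavx512f): GCC warns because the vector_size attribute of __m512 is not part of the template argument's type identity, while Clang stays silent.

```cpp
#include <immintrin.h>
#include <tuple>

template <typename T>
struct VecOps {};

// With GCC both aliases emit:
//   warning: ignoring attributes on template argument '__m512' [-Wignored-attributes]
using two_lanes_t = std::tuple<__m512, __m512>;
using ops_t       = VecOps<__m512>;
```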
Thanks. I guess we should fix most of these warnings instead of just ignoring them. @xuhancn can you take a look?
@jingxu10 will help land this PR.
Hi @AngryLoki, there are several flake8 format issues. Could you follow the steps below to run the flake8 check on your side?
@jingxu10 Python 3.11.8 and flake8==3.8.2 (current version) has no complaints.
Moreover, I don't change Python files in this pull-request, so it should not break anything.
We have a clang-format check which failed in our internal CI system.
Hi, we have enabled a CI to run format verification. When you rebase your PR, the CI task should be triggered automatically.
Compilation failed.
@jingxu10, thank you for checking!

```cpp
#include <immintrin.h>

// note: clang 13 and gcc 11.4 don't support -mavx512fp16 at all

// works perfectly, but only since gcc 13.1 and clang 14;
// gcc 12.4 error: unable to find numeric literal operator 'operator""f16'
auto vec_zero1 = _mm512_set1_ph(1.0f16);

// gcc >= 13.1 warning: converting to '_Float16' from 'float' with greater conversion rank
// -Wno-narrowing removes the warning, but it is not the best idea
auto vec_zero2 = _mm512_set1_ph(1.0f);

// same as above: produces warning with gcc
auto vec_zero3 = _mm512_set1_ph((_Float16)1.0f);

// works everywhere, no warnings!
// (clang 13 and gcc 11.4 don't support -mavx512fp16)
auto vec_zero4 = _mm512_set1_ph(static_cast<_Float16>(1.0f));
```

(same in godbolt conformance view: https://godbolt.org/z/MdGbT7beT)

I've updated my pull-request with the static_cast variant.
@DiweiSun, CI fails on the lintrunner install step. Could you update it? Thanks.

```diff
-pip install lintrunner
-pip install lintrunner-adapters
+pip install lintrunner lintrunner-adapters
```
Signed-off-by: Sv. Lockal <[email protected]>
Merged to main branch.
At this moment the intel-extension-for-pytorch build fails both with GCC 13.2.1 and Clang 17/18/19.

GCC build fails with a conversion error from c10::Half to _Float16. While Clang-based compilers use operator float() in c10::Half and perform the c10::Half -> float -> _Float16 conversion, gcc, when instantiating VecOps<__m512h>, refuses to cast float to _Float16 automatically (a repro sketch follows below).

Clang fails with a few issues:
- -Wl,-Bsymbolic-functions should be a linker option; when applied to the compiler it is considered an "unused parameter" and fails due to the -Werror -Wall combination.

Common issue:
- extern int FLAGS_caffe2_log_level should not be redeclared, otherwise the build fails with a conflicting redefinition when the buildsystem follows the contradictory recommendation to enable GLog and GFlags from "Clarify use of GLog/GFlags" (pytorch/pytorch#44948).

Warnings/misc:
- -Wall overrides some previous -Wno-* flags (e.g. -Wno-unused-function -Wall), so it should precede other flags.
- f16 suffix in headers for f16 constants (warning spam in Clang/gcc).
- bool operator!=(const LlgaTensorDesc& desc)