Change return type of TypeTensor::operator * #1818
Conversation
Don't have time to think about this in detail until I get back, but at first glance... CompareTypes served two purposes. The first was to allow us to do type deduction with only C++98 compilers, and that's mostly obsoleted by C++11 auto return types (and obsoleted even better by C++14, but for libMesh purposes trailing return types are merely annoying, not show-stopping). Doing type calculations via decltype rather than via template metaprogramming is a vast improvement.

But! The second purpose of CompareTypes was to disambiguate combinations of "wrapper" types. Originally I started (albeit without decltype, and thus with hideous C++98 techniques anyway) to make these kinds of sweeping ... Here you're not going to fall into that trap, because ScalarTraits won't have an instantiation for future ...

Apologies if not all of the above makes sense; the heat today broke my brain.
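To make the distinction concrete, here is a minimal sketch contrasting the two approaches. This is my own illustration, not libMesh's actual definitions: `CompareTypes` is reduced to a toy trait, and the function names are hypothetical.

```cpp
#include <complex>

// C++98 style: the result type comes from a trait that must be specialized
// for every combination of operand types (toy version of CompareTypes).
template <typename S, typename T>
struct CompareTypes {}; // no supertype unless somebody specializes

template <typename T>
struct CompareTypes<T, T> { typedef T supertype; };

template <typename T>
struct CompareTypes<T, std::complex<T>> { typedef std::complex<T> supertype; };

template <typename S, typename T>
typename CompareTypes<S, T>::supertype
multiply_via_trait(const S & a, const T & b) { return a * b; }

// C++11 style: deduce the result type directly from the expression; no
// per-combination specialization required.
template <typename S, typename T>
auto multiply_via_decltype(const S & a, const T & b) -> decltype(a * b)
{ return a * b; }

int main()
{
  std::complex<double> c(1, 2);
  multiply_via_trait(3.0, c);    // OK only because a specialization exists
  multiply_via_decltype(3.0, c); // OK, and keeps working for new types
  return 0;
}
```

The decltype form needs no new specializations when a new operand type appears; the trait form fails to compile until someone adds one.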
So the above multiplication works for me, because I've also modified the `DualNumber_op` macro:

```cpp
#define DualNumber_op(opname, functorname, dn_first_calc, dn_second_calc, dualcalc) \
template <typename T, typename D> \
template <typename T2> \
inline DualNumber<T, D> & DualNumber<T, D>::operator opname##=(const T2 & in) \
{ \
const auto & a = *this; \
const auto & b = in; \
this->derivatives() = dn_first_calc; \
this->value() opname##= in; \
return *this; \
} \
\
template <typename T, typename D> \
template <typename T2, typename D2> \
inline DualNumber<T, D> & DualNumber<T, D>::operator opname##=(const DualNumber<T2, D2> & in) \
{ \
const auto & a = *this; \
const auto & b = in; \
this->derivatives() = dualcalc; \
this->value() opname##= in.value(); \
return *this; \
} \
\
template <typename T, size_t N, typename T2> \
inline auto operator opname(const DualNumber<T, NumberArray<N, T>> & a, \
const DualNumber<T2, NumberArray<N, T2>> & b) \
->DualNumber<decltype(a.value() opname b.value()), \
NumberArray<N, decltype(a.value() opname b.value())>> \
{ \
auto value = a.value() opname b.value(); \
auto derivatives = dualcalc; \
return {value, derivatives}; \
} \
\
template <typename T, typename T2, size_t N> \
inline auto operator opname(const T & a, const DualNumber<T2, NumberArray<N, T2>> & b) \
->DualNumber<decltype(a opname b.value()), NumberArray<N, decltype(a opname b.value())>> \
{ \
auto value = a opname b.value(); \
auto derivatives = dn_second_calc; \
return {value, derivatives}; \
} \
\
template <typename T, size_t N, typename T2> \
inline auto operator opname(const DualNumber<T, NumberArray<N, T>> & a, const T2 & b) \
->DualNumber<decltype(a.value() opname b), NumberArray<N, decltype(a.value() opname b)>> \
{ \
auto value = a.value() opname b; \
auto derivatives = dn_first_calc; \
return {value, derivatives}; \
} \
void ANONYMOUS_FUNCTION() // dummy declaration so each macro invocation below can end with ';'
DualNumber_op(+, Plus, a.derivatives(), b.derivatives(), a.derivatives() + b.derivatives());
DualNumber_op(-, Minus, a.derivatives(), -b.derivatives(), a.derivatives() - b.derivatives());
DualNumber_op(*,
Multiplies,
a.derivatives() * b,
a * b.derivatives(),
a.value() * b.derivatives() + a.derivatives() * b.value());
DualNumber_op(/,
Divides,
a.derivatives() / b,
-a * b.derivatives() / (b.value() * b.value()),
(b.value() * a.derivatives() - b.derivatives() * a.value()) /
(b.value() * b.value()));
```
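As a quick sanity check of what the macro generates, here is a hypothetical usage sketch. It assumes MetaPhysicL's dualnumber.h and numberarray.h headers; the two-component derivative array and the variable names are my own illustration.

```cpp
#include "metaphysicl/dualnumber.h"
#include "metaphysicl/numberarray.h"

using namespace MetaPhysicL;

int main()
{
  // Two independent variables, each carrying a 2-component derivative array.
  typedef DualNumber<double, NumberArray<2, double>> ADReal;

  ADReal x(3.0), y(4.0);
  x.derivatives()[0] = 1.0; // seed dx/dx = 1
  y.derivatives()[1] = 1.0; // seed dy/dy = 1

  // Uses the operator* generated above: value = 12, and by the product rule
  // derivatives = a.value()*b.derivatives() + a.derivatives()*b.value()
  //             = {4, 3}.
  ADReal z = x * y;

  return z.value() == 12.0 ? 0 : 1;
}
```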
If you're changing the DualNumber stuff for MOOSE purposes, does that entail forking MetaPhysicL and risking conflicts later? So far all we use MetaPhysicL for in the main libMesh code is DynamicSparseNumberArray in the projection matrix construction, so you could change DualNumber stuff to your heart's content for the moment without CI complaining, but if a fork is actually required then I worry about the increased maintenance difficulties down the road. Sorry to vanish for so long, BTW, but I'm back on the clock again now.
No worries. I think most or all of the changes I've made could be generally useful, and I would like to contribute them back to upstream MetaPhysicL. I talked briefly with @jwpeterson... what are your thoughts on having ...
@roystgnr would you mind looking at this again?
This actually makes more sense to me after our discussions on the DualNumber changes. I still have a suspicion that it'll be a blind alley in the end, that we'll want to use something like an expansion of CompareTypes to get at operation outputs more subtle than "one Scalar type nested in one non-Scalar type". But that's our limitation already, and this just makes the result more flexible.
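For reference, the "one Scalar type nested in one non-Scalar type" pattern being referred to looks roughly like the following. This is a simplified sketch with stand-in ScalarTraits and TypeVector classes, not libMesh's actual code.

```cpp
#include <type_traits>

// Simplified stand-ins; libMesh's real ScalarTraits and TypeVector differ.
template <typename T>
struct ScalarTraits { static const bool value = std::is_arithmetic<T>::value; };

template <typename T>
struct TypeVector { T _coords[3]; };

// "One Scalar type nested in one non-Scalar type": scalar * wrapper, with
// the result's component type deduced from the component multiplication.
// The enable_if guard keeps this overload out of play for non-scalar types.
template <typename T, typename Scalar>
inline auto operator*(const Scalar & s, const TypeVector<T> & v) ->
    typename std::enable_if<ScalarTraits<Scalar>::value,
                            TypeVector<decltype(s * v._coords[0])>>::type
{
  TypeVector<decltype(s * v._coords[0])> result;
  for (int i = 0; i != 3; ++i)
    result._coords[i] = s * v._coords[i];
  return result;
}
```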
Currently, if I multiply something like a `TypeTensor<DualNumber<Real, D>>` and a `Real`, the multiplication will fail because we have not defined a `CompareTypes<DualNumber<Real, D>, Real>` specialization in libMesh. However, we can automatically deduce the return type using information we already have, without creating that specialization. If folks are amenable to this change, I could also spread this to more places.
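A minimal sketch of the shape of the proposed change follows. This is a simplified stand-in for TypeTensor, not the actual patch: the scalar overload deduces its return type from the component product via decltype instead of consulting CompareTypes.

```cpp
#include <utility>

// Hypothetical, simplified TypeTensor; libMesh's real class is far richer.
template <typename T>
class TypeTensor
{
public:
  // Return type deduced from a component-times-scalar product, so e.g.
  // TypeTensor<DualNumber<Real, D>> * Real compiles without a
  // CompareTypes<DualNumber<Real, D>, Real> specialization.
  template <typename Scalar>
  auto operator*(const Scalar & factor) const
      -> TypeTensor<decltype(std::declval<T>() * factor)>
  {
    TypeTensor<decltype(std::declval<T>() * factor)> result;
    for (int i = 0; i != 9; ++i)
      result._coords[i] = _coords[i] * factor;
    return result;
  }

  T _coords[9]; // 3x3 components, row-major
};
```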