#blas

Giuseppe Bilotta
Even now, Thrust as a dependency is one of the main reasons why we have a #CUDA backend, a #HIP / #ROCm backend and a pure #CPU backend in #GPUSPH, but not a #SYCL or #OneAPI backend (which would allow us to extend hardware support to #Intel GPUs). <https://doi.org/10.1002/cpe.8313>

This is also one of the reasons why we implemented our own #BLAS routines when we introduced the semi-implicit integrator. A side effect of this choice is that it allowed us to develop the improved #BiCGSTAB that I've had the opportunity to mention before <https://doi.org/10.1016/j.jcp.2022.111413>. Sometimes I do wonder whether it would be appropriate to “excorporate” it into its own library for general use, since it's something that would benefit others. OTOH, this one was developed specifically for GPUSPH and is tightly integrated with the rest of it (including its support for multi-GPU), and refactoring it into a library like cuBLAS is

a. too much effort
b. probably not worth it.

Again, following @eniko's original thread, it's really not that hard to roll your own, and probably less time-consuming than trying to wrangle your way through an API that may or may not fit your needs.

6/
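To make the "roll your own" point concrete: the BLAS-1 building blocks a Krylov solver like BiCGSTAB needs (axpy, dot, plus a matrix-vector product) are tiny. GPUSPH's actual routines are GPU kernels with multi-GPU support; the plain-CPU C++ below is only a sketch of the idea, not GPUSPH code.

```cpp
// Illustration only: the BLAS-1 pieces a Krylov solver such as BiCGSTAB needs.
#include <cstddef>
#include <vector>

// y <- a*x + y  (the BLAS "axpy" operation)
void axpy(double a, const std::vector<double>& x, std::vector<double>& y) {
    for (std::size_t i = 0; i < y.size(); ++i)
        y[i] += a * x[i];
}

// <x, y>  (the BLAS "dot" operation)
double dot(const std::vector<double>& x, const std::vector<double>& y) {
    double s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        s += x[i] * y[i];
    return s;
}
```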
Christos Argyropoulos
Question for the #rstats crowd: do you disable hyperthreads when you run analyses in R with a multithreaded version of #blas, e.g. #openblas, #mkl, etc.?
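One concrete way to probe the hyperthreading question outside R is to time a DGEMM at several OpenBLAS thread counts (OpenBLAS also honours the OPENBLAS_NUM_THREADS environment variable, which is usually the easiest knob to turn from R). A minimal C++ sketch; the matrix size and the thread counts tried here are arbitrary choices for illustration:

```cpp
// Time a square DGEMM at several OpenBLAS thread counts, e.g. to compare
// "physical cores only" against oversubscribing the logical (hyper)threads.
// Link against OpenBLAS (e.g. -lopenblas).
#include <cblas.h>
#include <chrono>
#include <cstdio>
#include <vector>

// Provided by OpenBLAS; redeclared here in case the cblas.h in use doesn't expose it.
extern "C" void openblas_set_num_threads(int num_threads);

int main() {
    const int n = 2000;
    std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);

    for (int threads : {1, 6, 12, 36, 72}) {  // candidate thread counts
        openblas_set_num_threads(threads);
        const auto t0 = std::chrono::steady_clock::now();
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A.data(), n, B.data(), n, 0.0, C.data(), n);
        const auto t1 = std::chrono::steady_clock::now();
        std::printf("%2d threads: %.3f s\n", threads,
                    std::chrono::duration<double>(t1 - t0).count());
    }
    return 0;
}
```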
Christos Argyropoulos MD, PhD
5 of these methods can leverage multithreaded (MT) #BLAS, with a sweet spot of ~6 threads for the ~40% of the time spent in MT regions. The E5-2697 has 36/72 (physical/logical) cores, so in the average-case scenario 0.4 × 3 × 6 cores + 2 (serial methods) tie up ~9.2 cores, i.e. ~13% of the 72 logical cores. So far the back-of-the-envelope calculation (i.e. if I run 5 out of the 2100 design points in parallel, I will stay within 15% of resource use) is holding rather well! #benchmarking #hpc #rstats
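Spelling out the arithmetic behind that estimate (numbers as given in the post):

$$
0.4 \times 3 \times 6 + 2 = 7.2 + 2 = 9.2 \ \text{cores}, \qquad \frac{9.2}{72} \approx 12.8\% \approx 13\%,
$$

comfortably below the ~15% resource budget mentioned.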
Christos Argyropoulos MD, PhD
Multiple cores to the rescue, as I am using a custom D-optimal design to benchmark memory/CPU utilization of 7 alternative implementations of frailty models for big data from #EHRs. By limiting the number of models that run simultaneously in #rstats to use < 30% of CPU, one can treat concurrent runs as independent when evaluating the sweet spot of #BLAS threads (6) for these methods.
Christos Argyropoulos MD, PhD
Happening now #BLAS #Rstats
FCLC
Simple question: what is your *default* BLAS package?
#HPC #BLAS
ct<p><a href="https://www.cs.utexas.edu/~flame/BLISRetreat2024/Talks.html" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">cs.utexas.edu/~flame/BLISRetre</span><span class="invisible">at2024/Talks.html</span></a></p><p>Talks from the 2024 BLIS Retreat</p><p><a href="https://mastodon.content.town/tags/blis" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>blis</span></a> <a href="https://mastodon.content.town/tags/blas" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>blas</span></a> <a href="https://mastodon.content.town/tags/appliedmath" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>appliedmath</span></a> <a href="https://mastodon.content.town/tags/linearalgebra" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>linearalgebra</span></a> <a href="https://mastodon.content.town/tags/hpc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hpc</span></a> <a href="https://mastodon.content.town/tags/scientificcomputing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>scientificcomputing</span></a></p>
ct<p><a href="https://oden.utexas.edu/news-and-events/news/BLIS-embraced-by-NVIDIA-RISC-V/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">oden.utexas.edu/news-and-event</span><span class="invisible">s/news/BLIS-embraced-by-NVIDIA-RISC-V/</span></a></p><p><a href="https://mastodon.content.town/tags/blis" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>blis</span></a> <a href="https://mastodon.content.town/tags/riscv" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>riscv</span></a> <a href="https://mastodon.content.town/tags/risc_v" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>risc_v</span></a> <a href="https://mastodon.content.town/tags/hpc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hpc</span></a> <a href="https://mastodon.content.town/tags/supercomputing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>supercomputing</span></a> <a href="https://mastodon.content.town/tags/blas" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>blas</span></a> <a href="https://mastodon.content.town/tags/linearalgebra" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>linearalgebra</span></a> <a href="https://mastodon.content.town/tags/statisticalcomputing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>statisticalcomputing</span></a> <a href="https://mastodon.content.town/tags/scientificcomputing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>scientificcomputing</span></a></p>
Methylzero
If you had to do a lot of dense linear algebra (QR, eigenvalues, SVD, linear least squares, etc.) on modern AMD *CPUs*, which library would you choose for maximum performance? #HPC #BLAS #LAPACK #linearalgebra #NumericalSimulation #amd
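Whichever candidate you benchmark on AMD CPUs (AOCL's BLIS/libFLAME, OpenBLAS, MKL, ...), the portable entry point is the same LAPACKE interface, so switching libraries is mostly a matter of what you link against. A minimal SVD sketch, with an arbitrary 3×2 example matrix:

```cpp
// Minimal LAPACKE SVD call; the same code links against OpenBLAS+LAPACK,
// AOCL-libFLAME, or MKL, so comparing libraries is mostly a relink
// (-llapacke -lopenblas, vendor equivalents, etc.).
#include <lapacke.h>
#include <cstdio>
#include <vector>

int main() {
    const lapack_int m = 3, n = 2;
    std::vector<double> A = {1, 2,   // 3x2 matrix, row-major
                             3, 4,
                             5, 6};
    std::vector<double> s(2), u(m * m), vt(n * n), superb(1);
    lapack_int info = LAPACKE_dgesvd(LAPACK_ROW_MAJOR, 'A', 'A', m, n,
                                     A.data(), n, s.data(),
                                     u.data(), m, vt.data(), n,
                                     superb.data());
    std::printf("info=%d, singular values: %g %g\n", (int)info, s[0], s[1]);
    return 0;
}
```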
Marcus Müller
The annual @gnuradio conference just started!
Turing laureate Jack Dongarra is the first keynote speaker! If you ever used LAPACK, BLAS (and you did – whether you know it or not), read the top500 supercomputer list, or are just all for sharing numerical libraries – you want to head to the streams on https://grcon.stream
#grcon24 #livestream #LAPACK #BLAS #FORTRAN
FCLC
Hi Friends! Little Life update!

I'm really, really excited to share I'm joining #Tenstorrent in September as a Field Application Engineer on the Customer Engineering Team!
Will be working on a few things, amongst them building a wicked fast #BLAS package for HPC & AI users!
#HPC #AI
Habr
No fail, no gain: how we fixed more than a million tests while validating OpenBLAS optimizations for RISC-V

The open RISC-V architecture is developing actively: new extensions and instructions are being added to the standard, and new cores and SoCs are being designed. Since many companies see the architecture's potential and are ready to use it in production, a software stack for high-performance computing, RISC-V HPC, is being built. This progress is accompanied by an emerging trend, OpenHPC, which stands for technological independence from commercial vendors' solutions, and that applies not only to software but also to hardware. For the OpenHPC concept to be realized faster, as many companies as possible need to join the initiative and help grow the ecosystem of RISC-V HPC solutions. My name is Andrey Sokolov, and I am a software engineer at YADRO. In our R&D team we set ourselves the task of investigating how the RISC-V architecture can be supported on the side of the BLAS and LAPACK linear algebra libraries. Testing one of the open-source libraries led us to some interesting discoveries, which I describe below the cut. Test results

https://habr.com/ru/companies/yadro/articles/821715/

#openblas #blas #lapack #linear_algebra #libraries #optimization #riscv
FCLC
What does this mean? It means that we now have a dedicated matrix ASIC that can be used via standard opcodes/compilers, available to anyone with a relevant toolchain and compiler.

For the most part, expect all of your #BLAS kernels to gain support over time!

For #HPC, in contrast with most matrix tile implementations, we have spec-mandated single- and double-precision support.

That's in contrast with the x86 AMX extensions, most consumer dGPU implementations, etc., which are 19 bits and below.
Habr
C++26: progress and news from ISO C++

Work in the C++ standardization committee is in full swing, and another meeting took place recently. As one of the participants, I'm sharing fresh news with Habr and a description of the changes planned for C++26. A little over a year remains until the new C++ standard, and here are some of the features that made it into the draft over the last two meetings: a ban on returning references to temporaries from a function, [[indeterminate]] and a reduction in the amount of undefined behavior, diagnostics on =delete;, saturation arithmetic, linear algebra (yes, BLAS and a bit of LAPACK), pack indexing for variadic parameters and templates (...[42]), a sane assert(...), and other nice touches. Beyond that, you'll find the committee's plans and progress on the big features, and much more. Let's look at the new features with examples.

https://habr.com/ru/companies/yandex/articles/801115/

#c++ #с++ #constexpr #c++26 #с++26 #numeric #floating_point #float #double #iso #programming #span #functions #function #blas #lapack #atomic #linear_algebra #variadic_templates