C++ is faster and safer than Rust: benchmarked by Yandex
Spoiler: C++ is not faster or slower – that's not the point, actually. This article continues our good tradition of busting myths about the Rust language shared by some big-name Russian companies.
Note. This article was originally published on Habr.com on May 10, 2020. It was translated and reposted here with the author's permission.
The previous article of this series is titled "Go is faster than Rust: benchmarked by Mail.Ru (RU)". Not so long ago, I tried to lure my coworker, a C-programmer from another department, to Rust. But I failed because – I'm quoting him:
In 2019, I was at the C++ CoreHard conference, where I attended Anton @antoshkka Polukhin's talk about the indispensable C++. According to him, Rust is a young language, and it's not that fast and even not that safe.
Anton Polukhin is a representative of Russia at the C++ Standardization Committee and an author of several accepted proposals to the C++ standard. He is indeed a prominent figure and authority on everything C++ related. But his talk had a few critical factual errors regarding Rust. Let's see what they are.
The part of Anton's presentation (RU) that we are particularly interested in is 13:00 through 22:35.
Myth 1. Rust's arithmetic is no safer than C++'s
To compare the two languages' assembly outputs, Anton picked the squaring function (link:godbolt) as an example:
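The snippet itself didn't survive the repost; judging from the discussion, it was a minimal squaring function. A Rust sketch of what was presumably compared (the C++ twin would be `int square(int x) { return x * x; }`):

```rust
// Minimal squaring function: rustc and clang emit the same
// imul-based assembly for this and its C++ counterpart.
fn square(x: i32) -> i32 {
    x * x
}

fn main() {
    // 46340 is the largest argument whose square still fits in i32.
    println!("{}", square(46340)); // prints 2147395600
}
```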
Anton (13:35):

We get the same assembly output. Great! We've got the baseline. Both C++ and Rust are producing the same output so far.
Indeed, arithmetic multiplication produces the same assembly listing in both cases – but only so far. The problem is, the two code fragments above do semantically different things. Sure, they both implement a squaring function, but for Rust the applicable range is [-2147483648, 2147483647], while for C++ it's [-46340, 46340]. How come? Magic?
The magic constants -46340 and 46340 are the largest absolute-value arguments whose squares fit in the std::int32_t type. Anything beyond that leads to undefined behavior due to signed integer overflow. If you don't believe me, ask PVS-Studio. If you are lucky enough to be on a team that has set up a CI environment with undefined behavior checks, you will get the following messages:
runtime error:
signed integer overflow: 46341 * 46341 cannot be represented in type 'int'
runtime error:
signed integer overflow: -46341 * -46341 cannot be represented in type 'int'
In Rust, an undefined-behavior arithmetic issue like that is literally impossible.
Let's see what Anton has to say about it (13:58):
The undefined behavior appears because we use a signed value, and the C++ compiler assumes that signed integer values don't overflow, since that would be undefined behavior. The compiler relies on this assumption to make a series of tricky optimizations. In Rust, overflow is documented behavior, but that won't make your life any easier: you'll get the same assembly code anyway, and multiplying two large positive numbers will produce a negative one, which is probably not what you expected. What's more, documenting this behavior prevents Rust from applying lots of optimizations – they are actually listed somewhere on their website.
I'd like to learn more about optimizations that Rust can't do, especially considering that Rust is based on LLVM, which is the same back end that Clang is based on. Therefore, Rust has inherited "for free" and shares with C++ most of the language-independent code transformations and optimizations. The assembly listings being identical in the example above is actually just a coincidence. Tricky optimizations and undefined behavior due to signed overflows in C++ can be a lot of fun to debug and inspire articles like this one (RU). Let's take a closer look at it.
We have a function that computes a polynomial hash of a string with an integer overflow:
unsigned MAX_INT = 2147483647;

int hash_code(std::string x) {
    int h = 13;
    for (unsigned i = 0; i < 3; i++) {
        h += h * 27752 + x[i];
    }
    if (h < 0) h += MAX_INT;
    return h;
}
On some strings – particularly on "bye" – and only on the server (interestingly, on my friend's computer everything was fine), the function would return a negative number. But why? If the value is negative, MAX_INT is to be added to it, thus producing a positive value.
Thomas Pornin shows that undefined behavior is really undefined. If you raise the value 27752 to the power of 3, you'll understand why hash evaluation is computed correctly on two letters but produces weird results on three.
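To put rough numbers on Pornin's point (my arithmetic, not from the article): each round multiplies the accumulator by roughly 27752, so after three rounds the accumulated multiplier is on the order of 27752³ – far outside i32 range, while 27752² still fits:

```rust
fn main() {
    // Compute the scale factors in i64, where they cannot overflow.
    let two_rounds = 27752i64.pow(2);
    let three_rounds = 27752i64.pow(3);

    // Two rounds of scaling still fit in i32; three blow past it.
    assert!(two_rounds < i32::MAX as i64);
    assert!(three_rounds > i32::MAX as i64);

    println!("27752^2 = {}, 27752^3 = {}", two_rounds, three_rounds);
}
```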
A similar function written in Rust works properly (link:playground):
fn hash_code(x: String) -> i32 {
    let mut h = 13i32;
    for i in 0..3 {
        h += h * 27752 + x.as_bytes()[i] as i32;
    }
    if h < 0 {
        h += i32::max_value();
    }
    return h;
}

fn main() {
    let h = hash_code("bye".to_string());
    println!("hash: {}", h);
}
Due to well-known reasons, this code executes differently in Debug and Release modes, and if you want to unify the behavior, you can use these families of functions: wrapping*, saturating*, overflowing*, and checked*.
As you can see, the documented behavior and the absence of undefined behavior due to signed overflows do make life easier.
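When you need one specific overflow behavior regardless of build mode, Rust's integer types let you spell it out with the wrapping_*, checked_*, and saturating_* method families; a small sketch using the 46341 boundary case from earlier:

```rust
fn main() {
    let x: i32 = 46341; // smallest positive value whose square overflows i32

    // wrapping_mul: two's-complement wraparound, always, in every build mode.
    assert_eq!(x.wrapping_mul(x), -2147479015);

    // checked_mul: returns None instead of overflowing.
    assert_eq!(x.checked_mul(x), None);

    // saturating_mul: clamps to the type's range.
    assert_eq!(x.saturating_mul(x), i32::MAX);

    println!("all overflow modes behave as documented");
}
```

The plain `*` operator panics on overflow in Debug and wraps in Release by default; these explicit forms are how you pin down one behavior.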
Squaring a number is a perfect example of how you can shoot yourself in the foot with just three C++ lines. At least you can do that in a fast and optimized way. While uninitialized memory access errors could be caught by carefully examining the code, arithmetic-related bugs appear out of the blue in "purely" arithmetic code, which you don't even suspect to have anything that could be broken.
Myth 2. The only strong point of Rust is object lifetime analysis
The following code is offered as an example (link:godbolt):
Anton (15:15):

Both the Rust compiler and the C++ compiler have compiled the application and... the bar function does nothing. Both compilers have issued warnings that something might be wrong. What am I driving at? When you hear somebody say Rust is a super cool and safe language, just know that the only safe thing about it is object lifetime analysis. UB or documented behavior that you might not expect is still there. The compiler still compiles code that obviously doesn't make sense. Well... that's just how it is.
We are dealing with infinite recursion here. Again, both compilers produce the same assembly output, i.e. both C++ and Rust generate a NOP for the bar function. But this is actually an LLVM bug.
If you look at the LLVM IR of infinite-recursion code, here's what you'll see (link:godbolt):
ret i32 undef is that very bug generated by LLVM.
The bug has been present in LLVM since 2006. It's an important issue, as you want to be able to mark infinite loops or recursions in a way that prevents LLVM from optimizing them down to nothing. Fortunately, things are improving. LLVM 6 was released with the llvm.sideeffect intrinsic added, and in 2019, rustc got the -Z insert-sideeffect flag, which adds llvm.sideeffect to infinite loops and recursions. Now infinite recursion is recognized as such (link:godbolt). Hopefully, this flag will soon be enabled by default in stable rustc too.
In C++, infinite recursion or loops without side effects are considered undefined behavior, so this LLVM's bug affects only Rust and C.
Now that we've cleared this up, let's address Anton's key statement: "the only safe thing about it is object lifetime analysis." This is a false statement, because the safe subset of Rust enables you to eliminate errors related to multithreading, data races, and memory corruption at compile time.
Myth 3. Rust's function calls touch memory without good reason
Anton (16:00):

Let's take a look at more complex functions. What does Rust do with them? We've fixed our bar function so that it calls the foo function now. You can see that Rust has generated two extra instructions: one pushes something onto the stack and the other pops something from the stack at the end. No such thing happens in C++. Rust has touched the memory twice. That's not good.

Here's the example (link:godbolt):
Rust's assembly output is long, but we have to find out why it differs from C++'s. In this example, Anton is using the -ftrapv flag for C++ and -C overflow-checks=on for Rust to enable the signed overflow check. If an overflow occurs, C++ will jump to the ud2 instruction, which leads to "Illegal instruction (core dumped)", while Rust jumps to the call of the core::panicking::panic function, preparation for which takes half the listing. If an overflow occurs, core::panicking::panic will output a nice explanation of why the program has crashed:
$ ./signed_overflow
thread 'main' panicked at 'attempt to multiply with overflow',
signed_overflow.rs:6:12
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
So where do these "extra" instructions touching the memory come from? The x86-64 calling convention requires the stack to be aligned to a 16-byte boundary, while the call instruction pushes the 8-byte return address onto the stack, thus breaking the alignment. To restore it, compilers insert instructions such as push rax. It's not only Rust – C++ does that as well (link:godbolt):
Both C++ and Rust have generated identical assembly listings; both have added push rbx for the sake of stack alignment. Q.E.D.
The most curious thing is that it is actually C++ that needs deoptimization by adding the -ftrapv argument to catch undefined behavior due to signed overflows. Earlier I showed that Rust would do fine even without the -C overflow-checks=on flag, so you can check the cost of correctly working C++ code for yourself (link:godbolt) or read this article. Besides, -ftrapv has been broken in GCC since 2008.
Myth 4. Rust is slower than C++

Throughout his presentation, Anton chooses Rust code examples that compile into slightly bigger assembly code. It's true not only of the examples above, the ones "touching" memory, but also of the one discussed at 17:30 (link:godbolt):
It looks as if all this analysis of assembly output serves the purpose of proving that more assembly code means slower language.
At the CppCon conference in 2019, Chandler Carruth gave an interesting talk titled "There Are No Zero-cost Abstractions". At 17:30, you can see him complaining about std::unique_ptr being costlier than raw pointers (link:godbolt). To bring its assembly output at least somewhat closer to the cost of raw pointers, he has to add noexcept, rvalue references, and use std::move. Well, in Rust the above works without additional effort. Let's compare two code snippets and their assembly outputs. I had to do some additional tweaking with extern "Rust" and unsafe in the Rust example to prevent the compiler from inlining the calls (link:godbolt):
With less effort, Rust generates less assembly code. And you don't need to give any clues to the compiler by using noexcept, rvalue references and std::move. When you compare languages, you should use adequate benchmarks. You can't just take any example you like and use it as proof that one language is slower than the other.
In December 2019, Rust outperformed C++ in the Benchmarks Game. C++ has caught up somewhat since then. But as long as you keep using synthetic benchmarks, the languages are going to keep pulling ahead of each other. I'd like to take a look at adequate benchmarks instead.
Myth 5. C → C++ — noop, C → Rust — PAIN!!!!!!!

Anton (18:30):

If we take a large desktop C++ application and try to rewrite it in Rust, we'll realize that our large C++ application uses third-party libraries. And a lot of third-party libraries written in C have C headers. You can borrow and use these headers in C++, wrapping them into safer constructs if possible. But in Rust, you'd have to rewrite all those headers or have them generated from the original C headers by some software.
Here, Anton lumps together two different issues: declaration of C functions and their subsequent use.
Indeed, declaring C functions in Rust requires you to either declare them manually or have them automatically generated – because these are two different programming languages. You can read more on that in my article about the Starcraft bot or check the example showing how to generate those wrappers.
Fortunately, Rust has a package manager called cargo, which allows you to generate declarations once and share them with the world. As you can guess, people share not only raw declarations but also safe and idiomatic wrappers. As of this year, 2020, the package registry crates.io contains about 40,000 crates.
And as for using a C library itself, it actually takes exactly one line in your config:
# Cargo.toml
[dependencies]
flate2 = "1.0"
The entire job of compiling and linking, with the version dependencies taken into account, will be done automatically by cargo. The interesting thing about the flate2 example is that when this crate first appeared, it used miniz, a library written in C, but later the community rewrote the C part in Rust, which made flate2 faster.
Myth 6. unsafe turns off all Rust checks

Anton (19:14):

All Rust checks are turned off inside unsafe blocks; it doesn't check anything within those blocks and totally relies on you having written correct code.
This one is a continuation of the issue of integrating C libraries into Rust code.
I'm sorry to say this, but the belief that all checks are disabled in unsafe is a typical misconception, since the Rust documentation clearly says that unsafe allows you to:
Dereference a raw pointer;
Call and declare unsafe functions;
Access or modify a mutable static variable;
Implement and declare an unsafe trait;
Access fields of unions.
Not a word about disabling all Rust checks. If you have lifetime errors, simply adding unsafe won't help your code compile. Inside that block, the compiler keeps checking types, tracing variables' lifetimes, checking thread safety, and so on and so forth. For more detail, see the article "You can't "turn off the borrow checker" in Rust".
You shouldn't treat unsafe as a way to "do what you please". This is a clue to the compiler that you take responsibility for a specific set of invariants that the compiler itself can't check. Take raw pointer dereferencing, for example. You and I know that C's malloc returns either NULL or a pointer to an allocated block of uninitialized memory, but the Rust compiler knows nothing about this semantics. That's why, when working with a raw pointer returned by malloc, you have to tell the compiler, "I know what I'm doing. I've checked this one – it's not a null; the memory is correctly aligned for this data type." You take responsibility for that pointer in the unsafe block.
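The same contract shows up with Rust's own std::alloc API (a sketch of mine, not from the talk): the allocator may return null, the memory starts out uninitialized, and upholding those invariants is exactly what the unsafe block marks as your responsibility:

```rust
use std::alloc::{alloc, dealloc, Layout};

// Allocates raw memory for one i32, writes `v`, reads it back, and frees it.
// The null check, the write-before-read, and the matching layout on dealloc
// are OUR invariants -- the compiler cannot verify them, hence `unsafe`.
fn alloc_and_store(v: i32) -> i32 {
    let layout = Layout::new::<i32>();
    unsafe {
        // `alloc` can return null on failure; dereferencing null would be UB.
        let p = alloc(layout) as *mut i32;
        assert!(!p.is_null(), "allocation failed");
        // The memory is uninitialized until this write.
        p.write(v);
        let out = p.read();
        dealloc(p as *mut u8, layout);
        out
    }
}

fn main() {
    println!("{}", alloc_and_store(42)); // prints 42
}
```

Note that outside the unsafe block the borrow checker, the type checker, and everything else still apply in full.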
Myth 7. Rust won't help you with C libraries

Anton (19:25):

Out of ten bugs I've encountered in C++ programs over the past month, three were caused by incorrect handling of C functions: forgetting to free memory, passing a wrong argument, passing a null pointer without a prior null check. There are lots of problems exactly with using C code. And Rust isn't going to help you with that at all. That's not good. Rust is allegedly much safer, but once you start using third-party libraries, you have to watch your step as carefully as with C++.
According to Microsoft's statistics, 70% of vulnerabilities are due to memory safety issues and other error types, which Rust actually prevents at compilation. You physically can't make those errors in the safe subset of Rust.
On the other hand, there is the unsafe subset, which allows you to dereference raw pointers, call C functions... and do other unsafe things that could break your program if misused. Well, that's exactly what makes Rust a system programming language.
At this point, you might find yourself thinking that having to keep your C function calls safe in Rust just as carefully as in C++ doesn't make Rust any better. But what makes Rust unique is its ability to separate safe code from potentially unsafe code, with subsequent encapsulation of the latter. And if you can't guarantee correct semantics at the current level, you need to delegate the unsafe upward to the calling code.
This is how delegation of unsafe upward is done in practice:
// Warning:
// Calling this method with an out-of-bounds index is undefined behavior.
unsafe fn unchecked_get_elem_by_index(elems: &[u8], index: usize) -> u8 {
    *elems.get_unchecked(index)
}
slice::get_unchecked is a standard unsafe function that retrieves an element by index without checking for out-of-bounds access. Since our unchecked_get_elem_by_index function doesn't check the index either and passes it on as-is, it is potentially buggy, and any call to it requires that we explicitly mark it as unsafe (link:playground):
// Warning:
// Calling this method with an out-of-bounds index is undefined behavior.
unsafe fn unchecked_get_elem_by_index(elems: &[u8], index: usize) -> u8 {
    *elems.get_unchecked(index)
}

fn main() {
    let elems = &[42];
    let elem = unsafe { unchecked_get_elem_by_index(elems, 0) };
    dbg!(elem);
}
If you pass an index that is out of bounds, you'll be accessing uninitialized memory. The unsafe block is the only place where you can do that.
However, we can still use this unsafe function to build a safe version (link:playground):
// Warning:
// Calling this method with an out-of-bounds index is undefined behavior.
unsafe fn unchecked_get_elem_by_index(elems: &[u8], index: usize) -> u8 {
    *elems.get_unchecked(index)
}

fn get_elem_by_index(elems: &[u8], index: usize) -> Option<u8> {
    if index < elems.len() {
        let elem = unsafe { unchecked_get_elem_by_index(elems, index) };
        Some(elem)
    } else {
        None
    }
}

fn main() {
    let elems = &[42];
    let elem = get_elem_by_index(elems, 0);
    dbg!(&elem);
}
This safe version will never corrupt memory, no matter what arguments you pass to it. Let's make this clear – I'm not encouraging you to write code like that in Rust at all (use the slice::get function instead); I'm simply showing how you can move from Rust's unsafe subset to the safe subset while still being able to guarantee safety. We could use a similar C function instead of unchecked_get_elem_by_index.
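For comparison, the idiomatic version built on slice::get needs no unsafe at all; a minimal sketch:

```rust
// slice::get performs the bounds check for us and returns Option<&u8>;
// `copied` turns Option<&u8> into Option<u8>.
fn get_elem_by_index(elems: &[u8], index: usize) -> Option<u8> {
    elems.get(index).copied()
}

fn main() {
    let elems = &[42];
    assert_eq!(get_elem_by_index(elems, 0), Some(42));
    assert_eq!(get_elem_by_index(elems, 1), None); // out of bounds, no UB
    println!("bounds handled safely");
}
```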
Thanks to the cross-language LTO, the call of a C function can be absolutely free:
I uploaded the project with the compiler flags enabled to GitHub. The resulting assembly output is identical to the code written in pure C (link:godbolt) but is guaranteed to be as safe as code written in Rust.
Myth 8. Rust's safety isn't proved

Anton (20:38):

Suppose we have a wonderful programming language called X. It's a mathematically verified programming language. If our application written in this X language happens to build, it will mean that it has been mathematically proved that our application doesn't have any bugs in it. Sounds great indeed. But there's a problem. We use C libraries, and when we use them from that X language, all our mathematical proof obviously kind of breaks down.
The correctness of Rust's type system, mechanisms of borrowing, ownership, lifetimes, and concurrency was proved in 2018. Given a program that is syntactically well-typed except for certain components that are only semantically (but not syntactically) well-typed, the fundamental theorem tells us that the entire program is semantically well-typed.
This means that linking and using a crate (library) that contains unsafe blocks but provides correct and safe wrappers won't make your code unsafe.
As a practical use of this model, its authors proved the correctness of some primitives of the standard library, including Mutex, RwLock, and thread::spawn, all of which use C functions. Therefore, you can't accidentally share a variable between threads without synchronization primitives in Rust; and if you use Mutex from the standard library, the variable will always be accessed correctly, even though its implementation relies on C functions. Isn't it great? Definitely so.
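What those guarantees buy you in practice (my sketch, not from the paper): the type system forces the synchronization, and the Mutex makes lost updates impossible:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increments a shared, Mutex-guarded counter from several threads.
// Dropping the Mutex and mutating a plain shared value would not race at
// runtime -- it simply would not compile.
fn parallel_count(threads: usize, increments: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..increments {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    // Always exactly 4000: no lost updates are possible through a Mutex.
    println!("count = {}", parallel_count(4, 1000));
}
```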
Conclusion
Unbiased discussion of the relative advantages of one programming language over another is difficult, especially when you have a strong liking for one language and dislike the other. It's common to see a prophet of yet another "C++ killer" show up, make strong statements without knowing much about C++, and predictably come under fire.
But what I expect from acknowledged experts are balanced observations that at least don't contain serious factual errors.
Many thanks to Dmitry Kashitsin and Aleksey Kladov for reviewing this article.