~rom1v/blog { un blog libre }

Gnirehtet rewritten in Rust

Several months ago, I introduced Gnirehtet, a reverse tethering tool for Android I wrote in Java.

Since then, I rewrote it in Rust.

And it’s also open source! Download it, plug in an Android device, and execute:

./gnirehtet run

(adb must be installed)

Why Rust?

At Genymobile, we wanted Gnirehtet not to require the Java Runtime Environment, so the main requirement was to compile the application to a native executable binary.

Therefore, I first considered rewriting it in C or C++. But at that time (early May), I was interested in learning Rust, after vaguely hearing about what it provided.

However, I had never written a line of Rust code nor heard about Rust ownership, borrowing or lifetimes.

But I am convinced that the best way to learn a programming language is to work full-time on a real project in that language.

I was motivated, so after checking that it could fit our requirements (basically, I wrote a sample using the async I/O library mio, and executed it on both Linux and Windows), I decided to rewrite Gnirehtet in Rust.

Learning Rust

During the rewriting, I successively devoured the Rust book, Rust by example and the Rustonomicon. I learned a lot, and Rust is an awesome language. I now miss many of its features when I work on a C++ project.

About learning, Paul Graham wrote:

Reading and experience train your model of the world. And even if you forget the experience or what you read, its effect on your model of the world persists. Your mind is like a compiled program you’ve lost the source of. It works, but you don’t know why.

Some Rust concepts (like lifetimes, or move semantics by default) provided a significantly different training set, which definitely affected my model of the world (of programming).

I am not going to present all these features (just click on the links to the documentation if you are interested). Instead, I will try to explain where and why Rust resisted the design I wanted to implement, and how to rethink the problems within Rust’s constraints.

The following part requires some basic knowledge of Rust. You may want to skip directly to the stats.

Difficulties

The design of the Java application was pretty effective, so I wanted to reproduce the global architecture in the Rust version (with adaptations to make it more idiomatic Rust where necessary).

But I struggled with the details, especially to make the borrow checker happy. The rules are simple:

First, any borrow must last for a scope no greater than that of the owner. Second, you may have one or the other of these two kinds of borrows, but not both at the same time:

  • one or more references (&T) to a resource,
  • exactly one mutable reference (&mut T).

However, it took me some time to realize how they conflict with some patterns or principles.

Here is my feedback. I selected four subjects which are general enough to be independent of this particular project.

Encapsulation

The borrowing rules constrain encapsulation. This was the first consequence I realized.

Here is a canonical sample:

pub struct Data {
    header: [u8; 4],
    payload: [u8; 20],
}

impl Data {
    pub fn new() -> Self {
        Self {
            header: [0; 4],
            payload: [0; 20],
        }
    }

    pub fn header(&mut self) -> &mut [u8] {
        &mut self.header
    }

    pub fn payload(&mut self) -> &mut [u8] {
        &mut self.payload
    }
}

fn main() {
    let mut data = Data::new();
    let header = data.header();
    let payload = data.payload();
}

We just create a new instance of Data, then bind mutable references to the header and payload arrays to local variables, through accessors.

However, this does not compile:

$ rustc sample.rs
error[E0499]: cannot borrow `data` as mutable more than once at a time
  --> sample.rs:21:19
   |
25 |     let header = data.header();
   |                  ---- first mutable borrow occurs here
26 |     let payload = data.payload();
   |                   ^^^^ second mutable borrow occurs here
27 | }
   | - first borrow ends here

The compiler cannot assume that header() and payload() return references to disjoint data in the Data struct. Therefore, each call borrows the whole data structure. Since the borrowing rules forbid holding two mutable references to the same resource, it rejects the second call.

Sometimes, we face temporary limitations because the compiler is not smart enough (yet). This is not the case here: the implementation of header() might legitimately return a reference to payload, or write to the payload array, which would then violate the borrowing rules. And the validity of a method call cannot depend on the method implementation.
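
To make this concrete, here is a hypothetical implementation (not the one above) with exactly the same signature; the compiler only sees the signature, so it has to assume the worst:

impl Data {
    pub fn header(&mut self) -> &mut [u8] {
        // perfectly legal with this exact signature, yet it aliases the payload
        &mut self.payload
    }
}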

To fix the problem, the compiler must be able to know that the local variables header and payload reference disjoint data, for example by accessing the fields directly:

    let header = &mut data.header;
    let payload = &mut data.payload;

or by exposing a method providing both references simultaneously:

impl Data {
    fn header_and_payload(&mut self) -> (&mut [u8], &mut [u8]) {
        (&mut self.header, &mut self.payload)
    }
}

fn main() {
    let mut data = Data::new();
    let (header, payload) = data.header_and_payload();
}

Similarly, inside a struct implementation, the borrowing rules also make it hard to factor code out into a private method. Consider this (artificial) example:

pub struct Data {
    buf: [u8; 20],
    prefix_length: usize,
    sum: u32,
    port: u16,
}

impl Data {
    pub fn update_sum(&mut self) {
        let content = &self.buf[self.prefix_length..];
        self.sum = content.iter().cloned().map(u32::from).sum();
    }

    pub fn update_port(&mut self) {
        let content = &self.buf[self.prefix_length..];
        self.port = (content[2] as u16) << 8 | content[3] as u16;
    }
}

Here, the buf field is an array storing some prefix and content contiguously.

We want to factor out the way we retrieve the content slice, so that the update_*() methods are not bothered with the details. Let’s try:

 impl Data {
     pub fn update_sum(&mut self) {
-        let content = &self.buf[self.prefix_length..];
+        let content = self.content();
         self.sum = content.iter().cloned().map(u32::from).sum();
     }

     pub fn update_port(&mut self) {
-        let content = &self.buf[self.prefix_length..];
+        let content = self.content();
         self.port = (content[2] as u16) << 8 | content[3] as u16;
     }
+
+    fn content(&mut self) -> &[u8] {
+        &self.buf[self.prefix_length..]
+    }
 }

Unfortunately, this does not compile:

error[E0506]: cannot assign to `self.sum` because it is borrowed
  --> facto2.rs:11:9
   |
10 |         let content = self.content();
   |                       ---- borrow of `self.sum` occurs here
11 |         self.sum = content.iter().cloned().map(u32::from).sum();
   |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ assignment to borrowed `self.sum` occurs here

error[E0506]: cannot assign to `self.port` because it is borrowed
  --> facto2.rs:16:9
   |
15 |         let content = self.content();
   |                       ---- borrow of `self.port` occurs here
16 |         self.port = (content[2] as u16) << 8 | content[3] as u16;
   |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ assignment to borrowed `self.port` occurs here

As in the previous example, retrieving the reference through a method borrows the whole struct (here, self).

To work around the problem, we can explain to the compiler that the fields are disjoint:

impl Data {
    pub fn update_sum(&mut self) {
        let content = Self::content(&self.buf, self.prefix_length);
        self.sum = content.iter().cloned().map(u32::from).sum();
    }

    pub fn update_port(&mut self) {
        let content = Self::content(&self.buf, self.prefix_length);
        self.port = (content[2] as u16) << 8 | content[3] as u16;
    }

    fn content(buf: &[u8], prefix_length: usize) -> &[u8] {
        &buf[prefix_length..]
    }
}

This compiles, but totally defeats the purpose of factorization: the caller has to provide the necessary fields.

As an alternative, we can use a macro to inline the code:

macro_rules! content {
    ($self:ident) => {
        &$self.buf[$self.prefix_length..]
    }
}

impl Data {
    pub fn update_sum(&mut self) {
        let content = content!(self);
        self.sum = content.iter().cloned().map(u32::from).sum();
    }

    pub fn update_port(&mut self) {
        let content = content!(self);
        self.port = (content[2] as u16) << 8 | content[3] as u16;
    }
}

But this seems far from ideal.

I think we must just live with it: encapsulation sometimes conflicts with the borrowing rules. After all, this is not so surprising: enforcing the borrowing rules requires following every concrete access to resources, while encapsulation aims to abstract them away.

Observer

The observer pattern is useful for registering event listeners on an object.

In some cases, this pattern may not be straightforward to implement in Rust.

For simplicity, let’s consider that the events are u32 values. Here is a possible implementation:

pub trait EventListener {
    fn on_event(&self, event: u32);
}

pub struct Notifier {
    listeners: Vec<Box<EventListener>>,
}

impl Notifier {
    pub fn new() -> Self {
        Self { listeners: Vec::new() }
    }

    pub fn register<T: EventListener + 'static>(&mut self, listener: T) {
        self.listeners.push(Box::new(listener));
    }

    pub fn notify(&self, event: u32) {
        for listener in &self.listeners {
            listener.on_event(event);
        }
    }
}

For convenience, make closures implement our EventListener trait:

impl<F: Fn(u32)> EventListener for F {
    fn on_event(&self, event: u32) {
        self(event);
    }
}

Thus, its usage is simple:

    let mut notifier = Notifier::new();
    notifier.register(|event| println!("received [{}]", event));
    println!("notifying...");
    notifier.notify(42);

This prints:

notifying...
received [42]

So far, so good.

However, things get a bit more complicated if we want to mutate some state when an event is received. For example, let’s implement a struct storing all the events we received:

pub struct Storage {
    events: Vec<u32>,
}

impl Storage {
    pub fn new() -> Self {
        Self { events: Vec::new() }
    }

    pub fn store(&mut self, value: u32) {
        self.events.push(value);
    }

    pub fn events(&self) -> &Vec<u32> {
        &self.events
    }
}

To be able to fill this Storage on each event received, we somehow have to pass it along with the event listener, which will be stored in the Notifier. Therefore, we need a single instance of Storage to be shared between the caller code and the Notifier.

Holding two mutable references to the same object obviously violates the borrowing rules, so we need a reference-counting pointer.

However, such a pointer is read-only, so we also need a RefCell for interior mutability.

Thus, we will use an instance of Rc<RefCell<Storage>>. It may seem too verbose, but using Rc<RefCell<T>> (or Arc<Mutex<T>> for thread-safety) is very common in Rust. And it can get worse.

Here is the resulting client code:

    use std::cell::RefCell;
    use std::rc::Rc;

    let mut notifier = Notifier::new();

    // first Rc to the Storage
    let rc = Rc::new(RefCell::new(Storage::new()));

    // second Rc to the Storage
    let rc2 = rc.clone();

    // register the listener saving all the received events to the Storage
    notifier.register(move |event| rc2.borrow_mut().store(event));

    notifier.notify(3);
    notifier.notify(141);
    notifier.notify(59);
    assert_eq!(&vec![3, 141, 59], rc.borrow().events());

That way, the Storage is correctly mutated from the event listener.

All is not solved, though. In this example, we had access to the Rc<RefCell<Storage>> instance. What if we only have access to the Storage, e.g. if we want Storage to register itself from one of its methods, without requiring the caller to provide the Rc<RefCell<Storage>> instance?

impl Storage {
    pub fn register_to(&self, notifier: &mut Notifier) {
        notifier.register(move |event| {
            /* how to retrieve a &mut Storage from here? */
        });
    }
}

We need to retrieve the Rc<RefCell<Storage>> from the Storage in some way.

To do so, the idea is to make the Storage aware of its own reference-counting pointer. Of course, this only makes sense if Storage is constructed inside a Rc<RefCell<Storage>>.

This is exactly what enable_shared_from_this provides in C++, so we can draw inspiration from how it works: just store a Weak<RefCell<…>>, downgraded from the Rc<RefCell<…>>, into the structure itself. That way, we can use it to get a &mut Storage reference back in the event listener:

use std::rc::{Rc, Weak};
use std::cell::RefCell;

pub struct Storage {
    self_weak: Weak<RefCell<Storage>>,
    events: Vec<u32>,
}

impl Storage {
    pub fn new() -> Rc<RefCell<Self>> {
        let rc = Rc::new(RefCell::new(Self {
            self_weak: Weak::new(), // initialize empty
            events: Vec::new(),
        }));
        // set self_weak once we get the Rc instance
        rc.borrow_mut().self_weak = Rc::downgrade(&rc);
        rc
    }

    pub fn register_to(&self, notifier: &mut Notifier) {
        let rc = self.self_weak.upgrade().unwrap();
        notifier.register(move |event| rc.borrow_mut().store(event))
    }
}

Here is how to use it:

    let mut notifier = Notifier::new();
    let rc = Storage::new();
    rc.borrow().register_to(&mut notifier);
    notifier.notify(3);
    notifier.notify(141);
    notifier.notify(59);
    assert_eq!(&vec![3, 141, 59], rc.borrow().events());

So it is possible to implement the observer pattern in Rust, but this is a bit more challenging than in Java ;-)

When possible, it might be preferable to avoid it.

Mutable data sharing

Mutable references cannot be aliased.

How to share mutable data, then?

We saw that we can use Rc<RefCell<…>> (or Arc<Mutex<…>>), which enforces the borrowing rules at runtime. However, this is not always desirable:

  • it forces a new allocation on the heap,
  • each access has a runtime cost,
  • it always borrows the whole resource.
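
In particular, the rules do not disappear, they are just enforced dynamically: a conflicting borrow becomes a panic instead of a compile error. A minimal sketch (unrelated to the project):

use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(0u32);
    let _first = cell.borrow_mut();  // first dynamic mutable borrow: fine
    let _second = cell.borrow_mut(); // compiles, but panics at runtime:
                                     // the rules are enforced dynamically
}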

Alternatively, we could use raw pointers manually inside unsafe code, but then this would be unsafe.

There is another way, which consists of exposing temporary borrowing views of an object. Let me explain.

In Gnirehtet, a packet contains a reference to the raw data (stored in some buffer elsewhere) along with the IP and TCP/UDP header fields values (parsed from the raw data). We could have used a flat structure to store everything:

pub struct Packet<'a> {
    raw: &'a mut [u8],
    ipv4_source: u32,
    ipv4_destination: u32,
    ipv4_protocol: u8,
    // + other ipv4 fields
    transport_source: u16,
    transport_destination: u16,
    // + other transport fields
}

The Packet would provide setters for all the header fields (updating both the packet fields and the raw data). For example:

impl<'a> Packet<'a> {
    pub fn set_transport_source(&mut self, transport_source: u16) {
        self.transport_source = transport_source;
        let transport = &mut self.raw[20..];
        BigEndian::write_u16(&mut transport[0..2], transport_source);
    }
}

But this would be poor design (especially since TCP and UDP header fields are different).

Instead, we would like to extract IP and transport headers to separate structs, managing their own part of the raw data:

// violates the borrowing rules

pub struct Packet<'a> {
    raw: &'a mut [u8], // the whole packet (including headers)
    ipv4_header: Ipv4Header<'a>,
    transport_header: TransportHeader<'a>,
}

pub struct Ipv4Header<'a> {
    raw: &'a mut [u8], // slice related to ipv4 headers
    source: u32,
    destination: u32,
    protocol: u8,
    // + other ipv4 fields
}

pub struct TransportHeader<'a> {
    raw: &'a mut [u8], // slice related to transport headers
    source: u16,
    destination: u16,
    // + other transport fields
}

You immediately spotted the problem: there are several mutable references to the same resource (the raw byte array) at the same time.

Note that splitting the array is not a possibility here, since the raw slices overlap: we need to write the whole packet at once to the network, so the raw array in Packet must include the headers.
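
For truly disjoint regions, slice::split_at_mut() would be the idiomatic answer, since it yields two non-overlapping mutable slices that satisfy the borrowing rules. A minimal sketch, which does not apply here precisely because our slices overlap:

fn main() {
    let mut raw = [0u8; 40];
    // split_at_mut() hands out two non-overlapping mutable slices,
    // which the borrow checker accepts
    let (headers, payload) = raw.split_at_mut(20);
    headers[0] = 0x45;
    payload[0] = 0xff;
}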

We need a solution compatible with the borrowing rules.

Here is the one I came up with:

  • store the header data separately, without the raw slices,
  • create view structs for IP and transport headers, with lifetime bounds,
  • expose Packet methods returning view instances.

And here is a simplification of the actual implementation:

pub struct Packet<'a> {
    raw: &'a mut [u8],
    ipv4_header: Ipv4HeaderData,
    transport_header: TransportHeaderData,
}

pub struct Ipv4HeaderData {
    source: u32,
    destination: u32,
    protocol: u8,
    // + other ipv4 fields
}

pub struct TransportHeaderData {
    source: u16,
    destination: u16,
    // + other transport fields
}

pub struct Ipv4Header<'a> {
    raw: &'a mut [u8],
    data: &'a mut Ipv4HeaderData,
}

pub struct TransportHeader<'a> {
    raw: &'a mut [u8],
    data: &'a mut TransportHeaderData,
}

impl<'a> Packet<'a> {
    pub fn ipv4_header(&mut self) -> Ipv4Header {
        Ipv4Header {
            raw: &mut self.raw[..20],
            data: &mut self.ipv4_header,
        }
    }

    pub fn transport_header(&mut self) -> TransportHeader {
        TransportHeader {
            raw: &mut self.raw[20..40],
            data: &mut self.transport_header,
        }
    }
}

The setters are implemented on the views, where they hold a mutable reference to the raw array:

impl<'a> TransportHeader<'a> {
    pub fn set_source(&mut self, source: u16) {
        self.data.source = source;
        BigEndian::write_u16(&mut self.raw[0..2], source);
    }

    pub fn set_destination(&mut self, destination: u16) {
        self.data.destination = destination;
        BigEndian::write_u16(&mut self.raw[2..4], destination);
    }
}

That way, the borrowing rules are respected, and the API is elegant:

    let mut packet = …;
    // "transport_header" borrows "packet" during its scope
    let mut transport_header = packet.transport_header();
    transport_header.set_source(1234);
    transport_header.set_destination(1234);
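
Note that each view mutably borrows the packet for as long as it lives, so the two views cannot be held at the same time; something like this sketch would be rejected:

    let mut transport_header = packet.transport_header();
    let ipv4_header = packet.ipv4_header(); // rejected: cannot borrow `packet`
                                            // as mutable more than once at a time
    transport_header.set_source(1234);

In practice, you just request a fresh view whenever you need one.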

Compiler limitations

Rust is a young language, and the compiler has some annoying pitfalls.

The worst, in my opinion, is the lack of non-lexical lifetimes, which leads to unexpected errors:

struct Container {
    vec: Vec<i32>,
}

impl Container {
    fn find(&mut self, v: i32) -> Option<&mut i32> {
        None // we don't care about the implementation
    }

    fn get(&mut self, v: i32) -> &mut i32 {
        if let Some(x) = self.find(v) {
            return x;
        }
        self.vec.push(v);
        self.vec.last_mut().unwrap()
    }
}

error[E0499]: cannot borrow `self.vec` as mutable more than once at a time
  --> sample.rs:14:9
   |
11 |         if let Some(x) = self.find(v) {
   |                          ---- first mutable borrow occurs here
...
14 |         self.vec.push(v);
   |         ^^^^^^^^ second mutable borrow occurs here
15 |         self.vec.last_mut().unwrap()
16 |     }
   |     - first borrow ends here

Hopefully, it should be fixed soon.
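
In the meantime, the usual workaround is to restructure the code so that no borrow from the lookup is still alive when mutating. A sketch of get() rewritten this way (it bypasses the stub find() and searches vec directly, at the cost of looking twice):

impl Container {
    fn get(&mut self, v: i32) -> &mut i32 {
        // check without keeping any borrow alive…
        if !self.vec.contains(&v) {
            self.vec.push(v);
        }
        // …then borrow once, unconditionally, for the returned reference
        let i = self.vec.iter().position(|&x| x == v).unwrap();
        &mut self.vec[i]
    }
}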

The Impl Trait feature, which allows returning unboxed abstract types from functions, should also improve the experience (there is also an expanded proposal).

The compiler generally produces very helpful error messages. But when it does not, they can be very confusing.

Safety pitfalls

The first chapter of the Rustonomicon says:

Safe Rust is For Reals Totally Safe.

[…]

Safe Rust is the true Rust programming language. If all you do is write Safe Rust, you will never have to worry about type-safety or memory-safety. You will never endure a null or dangling pointer, or any of that Undefined Behavior nonsense.

That’s the goal. And that’s almost true.

Leakpocalypse

In the past, it was possible to write safe-Rust code accessing freed memory.

This “leakpocalypse” led to a clarification of the safety guarantees: not running a destructor is now considered safe. In other words, memory-safety may not rely on RAII anymore (in fact, it never could, but that was only noticed belatedly).

As a consequence, std::mem::forget is now safe, and JoinGuard has been deprecated and removed from the standard library (it has been moved to a separate crate).
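
As a small illustration (not from the project): leaking a value is safe, because the destructor simply never runs and nothing ever touches freed memory.

use std::mem;

fn main() {
    let data = Box::new([0u8; 1024]);
    // Safe: the box is leaked, its destructor never runs and the memory is
    // never reclaimed, but no dangling pointer is ever created.
    mem::forget(data);
}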

Other tools relying on RAII (like Vec::drain()) must take special care to prevent memory corruption.

Whew, memory-safety is (now) safe.

Undefined infinity

In C and C++, infinite loops without side-effects are undefined behavior. This makes it possible to write programs that unexpectedly disprove Fermat’s Last Theorem.

In practice, the Rust compiler relies on LLVM, which (currently) applies its optimizations assuming that infinite loops without side-effects are undefined behavior. As a consequence, such undefined behaviors also occur in Rust.

Here is a minimal sample to trigger it:

fn infinite() {
    loop {}
}

fn main() {
    infinite();
}

Running without optimizations, it behaves as “expected”:

$ rustc ub.rs && ./ub
^C                    (infinite loop, interrupt it)

Enabling optimizations makes the program panic:

$ rustc -O ub.rs && ./ub
thread 'main' panicked at 'assertion failed: c.borrow().is_none()', /checkout/src/libstd/sys_common/thread_info.rs:51
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Alternatively, we can produce unexpected results without crashing:

fn infinite(mut value: u32) {
    // infinite loop unless value initially equals 0
    while value != 0 {
        if value != 1 {
            value -= 1;
        }
    }
}

fn main() {
    infinite(42);
    println!("end");
}

Without optimizations:

$ rustc ub.rs && ./ub
^C                    (infinite loop, interrupt it)

But with optimizations:

$ rustc -O ub.rs && ./ub
end

This is a corner case that will probably be solved in the future. In practice, Rust’s safety guarantees are pretty strong (at the cost of being constraining).

Segfault

This section was added after publication.

There are other sources of undefined behaviors (look at the issues tagged I-unsound).

For instance, casting a float value that cannot fit into the target type is undefined behavior, which can be propagated to trigger a segfault:

#[inline(never)]
pub fn f(ary: &[u8; 5]) -> &[u8] {
    let idx = 1e100f64 as usize;
    &ary[idx..]
}

fn main() {
    println!("{}", f(&[1; 5])[0xdeadbeef]);
}

$ rustc -O ub.rs && ./ub
Segmentation fault

Stats

That’s all for my feedback about the language itself.

As an appendix, let’s compare the current Java and Rust versions of the relay server.

Number of lines

$ cloc relay-{java,rust}/src
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Rust                            29            687            655           4506
Java                            37            726            701           2931
-------------------------------------------------------------------------------

(tests included)

The Rust project is significantly bigger, for several reasons:

  • there are many borrowing view structs;
  • the Rust version contains its own selector class, wrapping the lower-level Poll, while the Java version uses the standard Selector;
  • the error handling for command-line parsing is more verbose.

The Java version has more files because the unit tests are separate, while in Rust they are in the same file as the code they test.

Just for information, here are the results for the Android client:

$ cloc app/src
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Java                            15            198            321            875
XML                              6              7              2             76
-------------------------------------------------------------------------------
SUM:                            21            205            323            951
-------------------------------------------------------------------------------

Binary size

--------------------------------------------
Java     gnirehtet.jar                   61K
--------------------------------------------
Rust     gnirehtet                      3.0M
         after "strip -g gnirehtet"     747K
         after "strip gnirehtet"        588K
--------------------------------------------

The Java binary itself is far smaller. The comparison is not fair though, since it requires the Java Runtime Environment:

$ du -sh /usr/lib/jvm/java-1.8.0-openjdk-amd64/
156M	/usr/lib/jvm/java-1.8.0-openjdk-amd64/

Memory usage

With a single TCP connection opened, here is the memory consumption for the Java relay server:

$ sudo pmap -x $RELAY_JAVA_PID
                  Kbytes     RSS   Dirty
total kB         4364052   86148   69316

(output filtered)

And for the Rust relay server:

$ sudo pmap -x $RELAY_RUST_PID
                  Kbytes     RSS   Dirty
total kB           19272    2736     640

Look at the RSS value, which indicates the actual memory used.

As expected, the Java version consumes more memory (86 MB) than the Rust one (less than 3 MB). Moreover, its value is unstable due to the allocation of tiny objects and their garbage collection, which also generates more dirty pages. By contrast, the Rust value is very stable: once the connection is created, there are no memory allocations at all.

CPU usage

To compare CPU usage, here is my scenario: a 500 MB file is hosted by Apache on my laptop; I start the relay server through perf stat, then I download the file from Firefox on Android. As soon as the file is downloaded, I stop the relay server (Ctrl+C).

Here are the results for the Java version:

$ perf stat -B java -jar gnirehtet.jar relay
 Performance counter stats for 'java -jar gnirehtet.jar relay':

      11805,458302      task-clock:u (msec)       #    0,088 CPUs utilized
                 0      context-switches:u        #    0,000 K/sec
                 0      cpu-migrations:u          #    0,000 K/sec
            28 618      page-faults:u             #    0,002 M/sec
    17 908 360 446      cycles:u                  #    1,517 GHz
    13 944 172 792      stalled-cycles-frontend:u #   77,86% frontend cycles idle
    18 437 279 663      instructions:u            #    1,03  insn per cycle
                                                  #    0,76  stalled cycles per insn
     3 088 215 431      branches:u                #  261,592 M/sec
        70 647 760      branch-misses:u           #    2,29% of all branches

     133,975117164 seconds time elapsed

And for the Rust version:

$ perf stat -B ./gnirehtet relay
 Performance counter stats for 'target/release/gnirehtet relay':

       2707,479968      task-clock:u (msec)       #    0,020 CPUs utilized
                 0      context-switches:u        #    0,000 K/sec
                 0      cpu-migrations:u          #    0,000 K/sec
             1 001      page-faults:u             #    0,370 K/sec
     1 011 527 340      cycles:u                  #    0,374 GHz
     2 033 810 378      stalled-cycles-frontend:u #  201,06% frontend cycles idle
       981 103 003      instructions:u            #    0,97  insn per cycle
                                                  #    2,07  stalled cycles per insn
        98 929 222      branches:u                #   36,539 M/sec
         3 220 527      branch-misses:u           #    3,26% of all branches

     133,766035253 seconds time elapsed

I am not an expert in analyzing the results, but as far as I understand from the task-clock:u value, the Rust version consumes about 4× less CPU time.

Conclusion

Rewriting Gnirehtet in Rust was an amazing experience, where I learnt a great language and new programming concepts. And now, we get a native application with better performance.

Happy reverse tethering!

Discuss on reddit and Hacker News.

Comments

Ralf

In C and C++, infinite loops without side-effects are undefined behavior.

Actually, in C (unlike C++), such loops are only UB if the loop condition is not a constant expression. However, LLVM fails to implement this exception, thus breaking some correct C programs. This was reported against LLVM more than ten years ago: https://bugs.llvm.org/show_bug.cgi?id=965.

Istvan Szekeres

Is the raw field in IPV4Header (and others) necessary? Are those accessed besides reading/writing the whole packet, including those headers?

If not, why not just use std::mem::transmute to convert the whole raw packet to/from the structs, right after reading / before writing the packet?

®om

@Ralf

Actually, in C (unlike C++), such loops are only UB if the loop condition is not a constant expression.

Thank you for the precision, I was not aware of this subtlety.

@Istvan Szekeres

Is the raw field in IPV4Header (and others) necessary? Are those accessed besides reading/writing the whole packet, including those headers?

In the device-to-network direction, on a new connection, the received headers are copied to a buffer with source and destination swapped, so that they can be updated (length and checksum fields) for each packet built in the network-to-device direction.

yglukhov

Thanks for the article, was really a pleasure to read. I wonder if you could evaluate Nim lang with the same project of yours.

0nkery

Thanks for the good article!

I’m wondering why you didn’t implement the EventListener trait for the Storage struct from your example with the Observer pattern? Also, you could change the signature of the on_event method to be:

pub trait EventListener {
    fn on_event(&mut self, event: u32);
}

So, your event listeners are free to mutate their state based on the given event. The Storage struct would greatly benefit from this design.

®om

I’m wondering why you didn’t implement EventListener trait for Storage struct from your example with Observer pattern?

That’s an alternative, but it would need to be implemented for Rc<RefCell<Storage>>.

Also, you could change a signature of on_event method […]

So, your event listeners are free to mutate their state based on given event.

That’s a good question, I hesitated to talk about it in the article.

The Notifier stores the listeners in a Vec<Box<EventListener>>. Some of them may need to mutate a state, some others may not.

If you define the trait method with &mut self, then the Notifier must store them in a Vec<Rc<RefCell<Box<EventListener>>>>, and always mutably borrow the RefCell (which is not free) to call on_event() on it, for all listeners.

To avoid this extra cost, it is better to let the listeners borrow only when necessary.

Jan Hudec

Ad patterns,

The borrowing rules constrain encapsulation.

Actually, the rules mainly constrain composition without encapsulation. In the first case, since you are providing accessors to the data members, it is not actually encapsulated. The internals of the struct are exposed and the methods don’t get a chance to enforce any invariants. Effectively, it is exactly as if you left the data members public.

That’s what the borrow checker does not like. But if it was actually encapsulated, each method would do a complete operation and that would not leave any outstanding borrow, so there would be no problem.

Also, unlike Java, in Rust the type is not a unit of encapsulation. Only the module is. So adding internal getters is a Java habit that does not translate well to Rust, but it is not really about encapsulation.

The observer pattern is useful for registering event listeners on an object.

In some cases, this pattern may not be straightforward to implement in Rust.

Here there does not seem to be any problem—except for the type inference deficiency—until you try to add a method to the observer to register itself.

And there I’d argue that Rust does not like this because it does not really make sense. A method is supposed to take arguments and do some operation on the invocant. However, register_to is trying to put a reference to the instance somewhere, and that is not the responsibility of the type itself; it is the responsibility of its owner.

Rust is more opinionated about owners, so it makes this hard. But I think it is not really a good idea in Java either. It should be in the owning module or the owning object and there you should have the Rc<RefCell<…>> available.

By the way, regarding the inference, I’d probably just declare type EventListener = Fn(u32) and not create a custom trait. Java didn’t have closures until recently, so the usual approach was to create special interface for each observer, but in languages that do have them (including C#, C++ etc.) the usual style is to just declare what signature is needed and use standard function-objects.

®om

Actually, the rules mainly constrain composition without encapsulation.

In my opinion, the rules are “annoying” (and unexpected, at first) when encapsulation is involved: returning something lifetime-bound to a part of a struct from a method expands the borrow to the whole struct.

In the first case since you are providing accessors to the data members, it is not actually encapsulated.

Using mutable references to the fields is the minimal way to show the problem, but it also applies to cases where actual data fields are not exposed:

pub struct Data {
    header: [u8; 4],
    payload: [u8; 20],
}

pub struct Header<'a> {
    raw: &'a mut [u8],
}

impl Data {
    pub fn header(&mut self) -> Header {
        Header { raw: &mut self.header }
    }
}

Effectively, it is exactly as if you left the data members public.

I’d argue that even exposing methods returning mutable references to fields is a form of encapsulation: the struct layout is hidden and it is possible to change it without breaking the API, so it’s different from leaving the data members public.

For instance, I may internally refactor my Data struct this way without the callers being affected:

struct Data {
    meta: Meta,
    content: ([u8; 20], String),
}

struct Meta {
    id: u32,
    header: [u8; 4],
}

impl Data {
    pub fn new() -> Self {
        Self {
            meta: Meta { id: 0, header: [0; 4] },
            content: ([0; 20], String::new()),
        }
    }

    pub fn header(&mut self) -> &mut [u8] {
        &mut self.meta.header
    }

    pub fn payload(&mut self) -> &mut [u8] {
        &mut self.content.0
    }
}

Also, unlike Java, in Rust the type is not a unit of encapsulation. Only the module is. So adding internal getters is a Java habit that does not translate well to Rust, but it is not really about encapsulation.

The boundaries are just not the same: adding getters is still useful to expose data to another module.

However, the register_to is trying to put reference to the instance somewhere and that is not responsibility of the type itself, it is responsibility of its owner.

Here is the concrete case. In Gnirehtet, when a poll event concerns a TCP stream receiving some data, the TCP connection builds an IP packet and sends it to the client that opened this TCP stream. But when the client buffer is full, it registers itself to the client so that the client will pull the data once its buffer has space for the packet.

And there I’d argue that rust does not like this because it does not really make sense. A method is supposed to take arguments and do some operation on the invocant. […]

Rust is more opinionated about owners, so it makes this hard. But I think it is not really a good idea in Java either.

Calling addSomethingListener(this) is very (very) common in Java (especially on Android): just grep add.*Listener(this) on any Android application that has a UI.

Riccardo

Thanks for the post. Why do you end the observer section with:

When possible, it might be preferable to avoid it

What are the pitfalls and alternatives?

Thanks again

SaO

Thanks for making this available! I was able to install it on an android tablet (which, as such, does not come with the “USB tether” option) and I was able to get wifi access from my linux laptop. Good!

Actually I want to use your tool to do wired vnc, but I’m a network newbie and I’d be most appreciative of your help on the details.

What I want is to use my tablet as a vnc display for my laptop, but my work place would not allow wifi/bluetooth connections, so I need to use USB. With gnirehtet I am able to reverse tether into the laptop.

Question: How do I determine the address/port of the connection so I can enter that info in the vnc viewer? I tried running the app “network discovery” after I started the gnirehtet connection but the app could not find the address. I also ran “su ifconfig -a” from a terminal emulator (on the tablet), but I got a dozen entries and could not figure out which is which.

I realize that this is an “off-label” use of your tool, but it would be extremely helpful for my work. Your help and comments are most welcome. THANKS!

Comments are closed.