Mozilla Engineer Josh Matthews (@lastontheboat) will give an overview of the Servo and Rust roadmaps; where to find assets, demos, and other resources; and ideas for how to engage the audience and community. And, even if you are not a systems programmer, plan to come away with some talking points for why these projects are important to Firefox, Mozilla, and the health of the internet.
- Okay everybody,
good morning, good afternoon, good evening,
welcome from Air Mozilla land.
Michael and I are here today in Mountain View.
We'll be face muting shortly
and turning the floor over to Josh Matthews
who works on Servo and Rust,
and he'll tell you a little bit more about himself.
He's live from Toronto where it's just after lunchtime.
Watch me face mute.
And without further ado.
We'll be saving questions till after Josh's presentation,
and enjoy the show, thanks.
- Hi folks.
So my work is officially I am part of the Servo team
under the research organization at Mozilla,
but that means that I am
both in charge of leading some people
and figuring out what we should be working on for Servo,
but also I do a lot of work
with building our contributor engagement story
and making it as easy as possible
for volunteers to contribute to Servo,
as well as working with universities
to set up partnerships with professors and students there
so that students get access
to making meaningful contributions to open source work
and gain real world experience,
and Servo gains additional features because of their work.
So today I will be covering
a relatively high level overview of what Servo is,
why we're working on it, what its future is,
why we chose to build it in Rust,
and what Rust gives us
that traditional programming languages do not,
and I'll cover like how other people could get involved
in learning Rust and getting to use it
and contributing to Servo
if that is something they're interested in.
So let's start.
So Servo is a browser engine,
and that is different than a browser.
Firefox is a browser.
Firefox has a browser engine inside of it called Gecko,
and I will keep referring to that
throughout this talk that I'm giving.
However, we also, confusingly, refer to Servo
as something you can download and run,
because we started releasing nightly builds
of Servo last summer,
and there's a link there
which will take you to download.servo.org.
And it'll run this kind of slick demo,
which I will attempt to show you right now.
I will share this screen instead.
So if you go to download.servo.org
and you're running Mac or Linux and you download it
you'll be able to open it,
and you'll get this slick
new browser interface that we're experimenting with.
Basically this is something called browser.html,
which is built in the same way
that the browser application for Firefox OS was built,
with some special new APIs that we put into Servo
in order to facilitate it,
but it means that we can see
what running Servo looks like under the hood
in a slick interface.
GitHub is able to load,
and if we visit the Servo project page
it will look like you might expect.
Meanwhile, there's an interface on the side
that brings up a menu,
and you can load a new tab in that.
And maybe go to DuckDuckGo
and search for your favorite thing.
But basically we have an interface around the browser engine
which allows people to experience
something that they're more familiar with,
but when we talk about Servo
we usually mean the actual engine
that's running inside of that.
Let's go back to my slides now.
So let's talk about what a browser engine
actually means in that case.
Basically it is the black box
which takes in URLs of webpages
and spits out the actual rendered page
that has all the content that you expect to see,
and then it will take in things
like mouse clicks or pressing keys
and figure out what should happen in response
and then show you what the output looks like
in response to that.
So the engine is made up of a bunch of different parts,
which I believe Mike may have touched on last week,
so I'll go through it fairly quickly.
Basically it encompasses making the request
for a page from the network,
then taking the HTML input and parsing that into a tree,
and then also figuring out the resources
that that document loads, fetching those resources like CSS,
parsing the CSS, applying it to the elements of the tree
based on the CSS rules,
then taking that and saying,
"Okay, given all these rules that are applied to elements,
"I can then turn these into boxes
"that get positioned on the page,
"and draw images, and draw text."
And once you have all of those things,
that forms a list of display items, basically.
Then those get rendered,
and that gives you a single picture
which is the content of the page.
The engine is also being given input from the user
and figuring out what happens in response to that,
and running timers, and continuously refreshing the page,
or refreshing the rendering of the page.
So the browser engine is all the things
that sort of just happen on their own
without the user doing things.
And then the browser that encompasses the engine
then provides the actual input
and the interface around that.
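The stages just described can be sketched as a chain of functions. Everything below (the type names, the string-based "tree") is a hypothetical simplification for illustration, not Servo's actual design:

```rust
// A toy model of the pipeline a browser engine runs for each page.
// All types and stage functions here are invented simplifications.

struct Document(String);         // the parsed HTML tree (flattened here)
struct StyledTree(String);       // tree with CSS rules applied
struct DisplayList(Vec<String>); // positioned boxes, text runs, images

fn parse_html(input: &str) -> Document {
    // Real engines build a DOM tree; we just keep the markup.
    Document(input.to_string())
}

fn apply_css(doc: &Document, css: &str) -> StyledTree {
    // Match CSS rules against elements of the tree.
    StyledTree(format!("{} styled with {}", doc.0, css))
}

fn layout(styled: &StyledTree) -> DisplayList {
    // Turn styled elements into positioned display items.
    DisplayList(vec![format!("box: {}", styled.0)])
}

fn render(list: &DisplayList) -> String {
    // Paint the display items into a single picture of the page.
    list.0.join("\n")
}

fn main() {
    let doc = parse_html("<h1>Hello</h1>");
    let styled = apply_css(&doc, "h1 { color: red }");
    let frame = render(&layout(&styled));
    println!("{}", frame);
}
```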
So you might be tempted to ask,
"Why do we have a new engine
"when we already have this venerable old engine
"that's running Firefox and is doing a great job?"
So the history of Firefox is interesting,
because it was one of the earliest browsers,
and that means that there's a lot of technical baggage,
which wouldn't necessarily make sense
for a browser that is being started from the ground up,
because over 10 to 15 years
the web has come to look very different
than it did when Firefox started.
The actual web content has moved
from mostly being text with some images
to being applications, and fancy graphical effects,
and people expect certain performance of it
based on their experience of native applications.
Additionally, the hardware
that we run browsers on is very different.
Not only do we have much more powerful desktop machines
than we had back in 2007,
we also have mobile devices, which are small
and have very different power characteristics,
and have restrictions around battery life
that you want to take into account.
Additionally, like other browsers
have simply done things differently
by having the virtue of being created after Firefox
and seeing the choices Firefox made
and saying, "No, I don't think we'll do that."
For example, when Chrome was first released,
they had a model of separate processes,
which is something that Firefox
has been working on integrating
since I started contributing back in 2009,
and we're only getting there now.
But Chrome was able to do it from the beginning.
WebKit had an easier time of it.
There's just a lot of changes
that could be made with hindsight.
But what's interesting is that
the things that websites can do have changed drastically.
We have things like WebRTC,
we have things like web audio and video,
and there's transformations you can do with CSS,
and animations that really
affect the kind of decisions you make
when you're designing a web browser engine.
So taking a product like Firefox
and retrofitting all of this new functionality into it,
and doing that in a way that is maintainable over time,
is a very tough challenge.
It's something that we continue to do in Firefox
as we keep updating it to keep up with competitors.
However, experimenting with that,
making tough calls, making exploratory technical changes
in such a product is really difficult,
because you're risking breaking backwards compatibility
with existing web content,
you have a huge user base,
you have extensions to think about.
Like doing experiments in Firefox is very difficult
if you do not freeze the browser in its current state,
because meanwhile it has to continue keeping up
with everything else that's going on.
So this is why we're starting from the ground up.
We're using Servo as a chance to really just do it better.
We're doing things like we're building
with multiple CPUs in mind from the beginning.
A really common complaint people have about Firefox
is that it doesn't make use of multiple threads.
Everything happens on a single thread,
and that's why we need to split it into multiple processes,
because that will allow us to separate web content
from the user interface.
But Servo is building it so that each piece of web content
gets its own thread,
and this will allow us
to just have a better design from the beginning.
Additionally, there's things that are present
in Gecko and other browser engines
that are just a constant sort of unsafety,
and a constant source of vulnerabilities and exploits,
that are just very difficult to solve
in a complete and thorough manner.
It often turns into let's fix the most recent security hole
that was discovered,
and we'll try to think of ways
to avoid that happening again.
But with Servo, we're looking for ways
to actually just negate entire classes
of security vulnerabilities.
Additionally, we're taking the choices
that our competitors have made
in terms of how they designed their engine,
and we're saying, "Okay, that looks like
"it was a very good choice.
"It would give us better performance,
"or better memory usage, or something,"
and we're not just blindly following
all of the choices Firefox made,
because we have the option to design things differently
and see how much difference it makes.
Finally, we're building things in a way
that all of the individual subsystems of Servo
could theoretically be extracted.
So this allows us to have much cleaner and clearer designs,
and also enables us to do more
with these choices we're making later down the line,
and I will get to that in a bit.
So the question next is what is the point,
or what is the purpose of Servo?
We understand what the point is,
and that's to explore the choices
that are very difficult to make in Firefox itself right now,
but what will we do with those choices?
So I'll tell you upfront that it is a non-goal.
We are not going to just
suddenly replace the engine of Firefox with Servo.
That is unrealistic,
because Servo is developed by a much smaller team,
has had much less time to mature,
is way behind Firefox in terms of feature completeness
and web compatibility.
Those are the things that are essential
for maintaining a user base.
So it is not feasible to consider
just abandoning Firefox as it is right now
and swapping in Servo instead.
So what are we going to do instead?
So there's a few things.
Basically, we have things we want to try in Servo,
and things we are trying in Servo,
and we're identifying precisely
what we think we can do better.
We're then building the pieces of that
that are necessary in order to actually make
realistic comparisons against other browsers,
so against Chrome, against Firefox,
so we can actually show this is as good in this respect,
or this is this much slower
or this much worse in this respect
so that we can actually make meaningful comparisons.
And when doing that, at some point we'll say,
"Okay, we've done this amount of work.
"We have these pieces of data
"showing that it's this much better or worse.
"Therefore this experiment is paying off or not."
And that will allow us to decide
whether it's actually worth continuing investing in,
and if it is, then we look at,
"Okay, what would it take in order to actually integrate
"this particular experiment into Firefox
"as a production component of a real web browser engine?"
And that is happening right now.
So some of the experiments that we're doing with Servo
include the style system,
and the rendering engine, and URL parsing.
So I know that Mike talked to you
about things related to Quantum last week.
So style system was built in Servo
in a way that we could take the code
that just like applies CSS rules
to the elements of the DOM tree,
and that we could extract it and put it into Firefox.
And we created enough benchmarks showing
that we could really get
a significant performance improvement
from doing that replacement.
At that point all that was missing
was a lot of integration work and feature completeness
that Servo didn't have yet.
So now there's a team
that has integrated the code from Servo into Firefox
and it's being updated every day.
As changes get made to Servo,
the code gets propagated
over to the Mozilla Central code repository automatically,
and it gets run on the tests for Firefox,
so that every time we make an improvement in Servo
then Firefox benefits automatically.
In a similar way, the same thing has happened
with the Web Render project.
So Web Render was a really interesting experiment
by one of the Servo team members.
Basically, Glenn came from a background
of building computer games,
and the rendering engines for computer games
have, it turns out, a very common way of being designed
which is very different
than the way that browsers do their rendering.
Honestly, this is not an area of my experience at all,
so I can't give a great explanation for what's different,
but basically Glenn discovered
that if he wrote a rendering engine
which could take the input from Servo
saying these are the items that need to be displayed,
then he could build an engine that was more like
one that would be used in a computer game
and get significantly better performance
in terms of rendering
and having lots of things moving around on the screen
at the same time,
and having lots of visual effects, like opacity and filters,
like all of these things just wouldn't harm performance
when rendering like they would
in traditional browser engines.
I think I have a link to the demo that Mike showed last week
with the difference between having like thousands of objects
spinning on a page in other browser engines,
versus in Servo using Web Render,
and it's just quite impressive
the way that that can change the restrictions
that web developers are used to.
One thing we want to do with Servo
is make it so that the traditional advice
for how to build webpages that are performant
shouldn't be necessary.
We want to make it so that
you can just build the pages that make sense,
and you don't have to check off the list
of things you're doing that could slow down the browser.
So finally, we're also integrating a URL parser
that's written in Rust into Firefox.
So that's currently running in Nightly,
the idea being that the URL parser is an occasional source
of security vulnerabilities,
and there's no reason for that to be the case.
So we've built one in Rust
which conforms to the URL specification.
We've been using it in Servo and it's now,
it's part of Firefox, but it's not the default yet.
They're still looking into what it would take
to make that switch.
So all of these things started as experiments in Servo where we said,
"Okay, we believe we can do it better.
"We believe we can do it more in parallel.
"Or, we can avoid the problems
"that plague every other browser engine."
We got it to the point where we demonstrated
that it was working on traditional web content.
Then we looked at, "Okay, what would it take
"in order to integrate that into Firefox?"
Because we had built it in this modular fashion,
we're actually able to extract that particular piece
and integrate it with Firefox.
So experiments that are less mature
include some really interesting ones.
So one is parallel layout,
and parallel layout is basically saying
if you have a webpage,
the way that browser engines work traditionally
is that they start at the top of the tree
and they lay out that particular element
based on its children.
So you have to go down all the way to the bottom of a tree
and lay out what are called the leaf nodes,
the nodes that don't have any children.
You figure out their position,
and you figure out their size,
and then you go up to the parent and you say,
"Okay, given your children's sizes and positions
"and the CSS rules that are applied to you,
"then therefore we know that you are laid out here,"
and then you can go to its siblings
and figure out their positions,
and then go up to the parent.
So it's this bottom-up traversal that really is blocked
on always having the information available
for the previous thing you laid out.
But, we looked at web content,
and there's a lot of content
which really doesn't necessarily need to know its siblings,
or certain children or parents or something,
and that it could be laid out in isolation,
which means you could actually lay it out in parallel.
So we've been working on making it possible to...
(Sorry, my nose is quite itchy.)
We've been working on making it possible
to lay out the elements of a page using multiple CPUs
when we can determine
that they don't actually need information about elements
that have not been laid out yet
or are being laid out in parallel.
So we're able to split up the page
into sort of isolated groups like this and lay them out,
and therefore make better use of the cores in your computer.
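As an illustration of the idea (not Servo's actual implementation, which uses a work-stealing scheduler), independent subtrees can be laid out on separate threads with Rust's standard scoped threads. The `layout_subtree` function and the width numbers here are invented for the sketch:

```rust
use std::thread;

// Hypothetical layout for one independent subtree: here, just the
// sum of its children's widths.
fn layout_subtree(children: &[u32]) -> u32 {
    children.iter().sum()
}

// Lay out several independent subtrees in parallel, one thread each.
// Scoped threads (Rust 1.63+) let the workers borrow `subtrees`
// without any reference counting or copying.
fn parallel_layout(subtrees: &[Vec<u32>]) -> Vec<u32> {
    thread::scope(|s| {
        let handles: Vec<_> = subtrees
            .iter()
            .map(|st| s.spawn(move || layout_subtree(st)))
            .collect();
        // The parent combines results only once every child subtree
        // has finished, mirroring the bottom-up traversal.
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    // Three subtrees whose layouts don't depend on each other.
    let subtrees = vec![vec![10, 20], vec![5], vec![7, 8, 9]];
    let widths = parallel_layout(&subtrees);
    println!("subtree widths: {:?}", widths);
    println!("total width: {}", widths.iter().sum::<u32>());
}
```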
This can actually benefit in other ways, too,
because there's experiments showing that with mobile devices
you can either run using all the cores at full clock speed,
and you'll get some amount of performance improvement
over just using a single core
and doing the model I talked about previously
about starting from the bottom
working your way up to the top.
However, you can also lay out
in parallel using all the cores,
but running the cores at less than full power.
It turns out that that can actually save power,
because there's something like a quadratic relationship
between clock speed and power draw,
so basically, you end up using less power
while taking the same amount of time
as using only a single core.
So that can be very useful for battery life
if we can get that working reliably.
Other experiments we're running.
These all link to blog posts about these particular projects
that are sort of interesting from a research standpoint
where we don't necessarily know that it's the best way,
but we think that there are benefits
we can derive from them.
Patrick, one of our team members,
just released a Rust library
which can basically take text and fonts
and use your graphics card
to actually draw them,
which can be a significant improvement in performance
over doing it without the graphics card.
So there's also other benefits that Servo gives us,
because other browsers basically were built first,
and then there were big pushes
to try to standardize behavior across them,
with the standards bodies saying,
"Okay, given what all these browsers do,
"what is the most common behavior
"that we can standardize on
"and then get everyone to adapt to that?"
So standards were written
in response to existing behavior in many cases
with some of the older standards,
whereas now Servo is coming through
and implementing features
for the first time from these standards,
and then actually validating
whether these standards describe what is actually happening,
because we're writing extra automated tests,
and other browsers are sharing the same tests with us
and writing their own tests,
and everyone is benefiting from the fact
that there's a new browser coming through
and actually checking
whether the standards are describing reality,
or just what they imagine to be the case.
So this is good.
Servo is also allowing us
to improve standardized behavior
through collaboration between browsers.
So you might be thinking,
"Okay, I get why there's benefits
"to creating a brand new browser engine,
"because we can make different choices than Firefox has
"without actually impacting Firefox
"until we've reached the point where we think
"it would actually be beneficial
"to integrate these choices back into Firefox.
"But why do we actually need
"a whole new programming language in order to do it?
"Because that seems like it's biting off
"more than we want to chew."
So Rust is a reaction
to a number of historical issues with Gecko
that are well established over the years,
namely the constant drive
for getting better performance out of the web browser
while not reducing the safety of our users.
So browsers always talk about the trade-offs.
If you wanted to be the most safe,
then you would choose a programming language
where it is impossible to write things
that will cause crashes that can be exploited by hackers.
However, those give you fewer guarantees as a programmer
about the kind of performance you can expect from your code,
and browsers are pushing so hard on what they're trying to do
that often they need every ounce of control
in order to get the most fluid animations
and get the nonstop video and audio.
You want to avoid that jerkiness
when scrolling up and down the page,
'cause that just breaks the immersion
of being in a webpage.
So all the browsers that are really trying to compete here
are using the language C++,
and that gives you all the control you need,
but it is less safe,
and that means that there is
a very unfortunate trade-off here.
So Rust is an attempt to give developers
the performance and control that they need
but without having to give them the threat,
or the risk, of writing unsafe code.
So it should be possible to write only safe code
without sacrificing performance.
That is the guarantee that Rust makes.
So let's talk about what unsafe means in this context.
You may have heard of the term dangling pointers,
and that basically means that
you have multiple pieces of code that are sharing memory.
They're referring to the same pieces of memory.
So maybe there's two pieces of code
that both have the same string.
So a safe thing to do would be
if a piece of code has their own copy of the string,
and so any time they need to access it
they know that they have this particular copy of the string,
and they can always access it, that's fine.
But, if you want to reduce memory usage,
copying a string can be wasteful,
especially if it's the contents of a gigabyte file.
Suddenly, you have two gigabytes taken up
when you really only want one.
So if you have pieces of code that share the same memory
then you run the risk of
what happens if one part of the code says,
"I am done with this string that is a gigabyte long.
"I am going to get rid of it
"in order to reduce memory usage?"
Because if you don't notify the other piece of code
that that string is no longer available,
suddenly it has a pointer to memory that is invalid,
and that is referred to as dangling.
If it tries to use that memory,
then suddenly that can be the route to being exploited
by someone who has discovered this fact.
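In Rust, the borrow checker turns this whole class of bug into a compile-time error. A minimal sketch (the short strings here stand in for the gigabyte file):

```rust
fn main() {
    // Sharing instead of copying: `shared` borrows the string's memory.
    let contents = String::from("pretend this is a gigabyte of file contents");
    let shared: &str = &contents;

    // The borrow checker guarantees `contents` cannot be freed
    // (dropped) while `shared` still exists, so this read can
    // never dangle:
    println!("{} bytes", shared.len());

    // The dangling version does not compile. Uncommenting this block
    // yields "borrowed value does not live long enough":
    //
    // let dangling;
    // {
    //     let temp = String::from("short-lived");
    //     dangling = &temp;
    // } // `temp` is freed here, while `dangling` still points at it
    // println!("{}", dangling);
}
```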
So another example of unsafety that we mean
is in terms of using multiple threads, parallel code,
and this is called data races.
This is where you have multiple threads
that are, again, sharing the same memory.
So if they're both reading it, that's okay,
because reading doesn't modify anything.
However, if you have code that's running in parallel
and they both try to manipulate the same memory,
but without having any kind of synchronization
to decide who does it first,
then you run the risk
that they will end up manipulating it incorrectly
and leaving invalid data there.
You can imagine that as one piece of code reads a number
and says, "Okay, I know this number is currently five.
"I'm going to increase it by one.
"It will now be six."
But, if you have another piece of code
doing the same thing at the same time,
they could both read the value five and say,
"Okay, I know that it is now six,
"so I'm gonna update it to say it is now six,"
when really it should be seven
if both of them were attempting to increase it by one
based on the current value.
So that is what not being synchronized means.
Just having an incorrect number doesn't seem that dangerous,
but it's more dangerous
when you have other data that you're modifying,
and you're using that to do other calculations.
Basically, data races are risky,
and unfortunately it's very easy to make mistakes like this
without realizing it,
because they can lie dormant
without being noticed very easily.
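The five-becomes-six race described above is exactly what Rust rules out at compile time: you cannot hand an unsynchronized mutable integer to two threads, because that simply does not compile. A minimal sketch using a `Mutex` for the synchronization (the function name is invented for illustration):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` workers that each add 1 to a shared counter.
// The Mutex serializes every read-modify-write, so the lost-update
// interleaving ("both read 5, both write 6") cannot happen.
fn parallel_increment(start: i32, threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(start));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Only one thread at a time holds the lock here.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    // Two threads both add 1 to 5; with synchronization the answer
    // is always 7, never 6.
    println!("final value: {}", parallel_increment(5, 2));
}
```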
Basically, dangling pointers and data races
are two examples of problems
that are often described as not real problems,
because they just require discipline to avoid them.
We have tools that can recognize when they're happening
and can report them to the programmer,
and the programmer can say,
"Oh good, I know that I made this mistake
"and I can now correct that.
"Thank you tools.
"I am a good programmer."
The problem is that these tools,
they don't catch everything.
They don't prevent the mistake from being made.
They only report it if it's encountered after the fact.
This feels like it should be unnecessary,
and it turns out that this is unnecessary,
because the way that we have addressed this problem in Rust
is by making the concept of ownership explicit.
You can imagine if you have a group of people
that are all wanting to use the same coloring book,
and they all have their own marker,
and everyone crowds around the table,
and they all start drawing it at the same time.
If you're lucky, they're all drawing in different parts,
and maybe they're being respectful of each other
and taking turns
so that only one person is drawing at a time.
But, there's always the risk
that someone is going to start drawing
and get really engrossed
and start going over someone else's drawing and ruin it.
This is what I mean by the risk is always here.
You can't actually prevent the bad thing from happening.
However, if that same group only has a single marker,
no one can draw unless they have the marker.
It is impossible to ruin someone's drawing
by being too engrossed and not noticing
that you're drawing into the area
that they are also drawing in.
This is the idea behind Rust.
Rust makes some programs harder to write,
because you cannot express the same things
that you would traditionally express
in other programming languages.
However, it also makes it impossible to write
certain kinds of unsafe programs,
and these are the kinds of problems
which have plagued Firefox
and other browsers for years and years.
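The single-marker rule maps directly onto Rust's borrowing rules: any number of readers, or exactly one writer, but never both at once. A minimal sketch:

```rust
fn main() {
    let mut coloring_book = String::from("blank page");

    // Many simultaneous readers are fine (shared borrows):
    let reader_a = &coloring_book;
    let reader_b = &coloring_book;
    println!("{} / {}", reader_a, reader_b);

    // Exactly one "marker" at a time (a unique, mutable borrow):
    let marker = &mut coloring_book;
    marker.push_str(" + a drawing");

    // While `marker` exists, no other borrow may touch the book.
    // Uncommenting the next line is a compile error: "cannot borrow
    // `coloring_book` as immutable because it is also borrowed as
    // mutable".
    // println!("{}", reader_a);

    println!("{}", marker);
}
```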
The links on this page go to blog posts that...
Sorry, should I be responding to
whoever just said something?
I can click that apparently.
- [Michael] No, you're great, thanks.
- [Havi] No, you're fine.
- Okay, great.
So the links on this page go to blog posts
that explain the concept
of ownership in Rust in more detail
using a very similar analogy.
As well, the other link goes to a blog post
by one of the Rust team members
talking about how Rust enables
what he calls fearless concurrency,
which refers to writing parallel code,
the thing that many people talk about
as being so complicated that you shouldn't attempt it
unless you're already an expert in writing parallel code,
but that's not the case in Rust,
because the compiler makes it impossible
for you to make mistakes.
So Rust is not just good for addressing
the problem of unsafety.
We're really trying to make it possible
for Rust to be the tool of choice
anytime you would otherwise choose C++,
when you're saying to yourself,
"I need a certain level of control,
"and a certain level
"of guarantee of performance in my code,"
then it should be possible to choose Rust instead.
So we're making it really easy to reuse existing code,
as well as share code that you've written
so other people can use it.
That's via the crates.io ecosystem,
a website that allows you to search for code,
much like the npm package manager
or Ruby's Bundler,
basically all these ecosystems
where it's really easy to share code.
We've learned from those.
We've brought in the people that helped build them,
and C++ has nothing similar.
This is a very key selling point.
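For example, reusing a published crate is a single line in a project's `Cargo.toml` (the project name here is made up; `rand` is a real, popular crate on crates.io):

```toml
# Cargo.toml for a hypothetical project that reuses code from crates.io.
[package]
name = "my-project"
version = "0.1.0"

[dependencies]
# One line pulls in a published crate; Cargo fetches and builds it.
rand = "0.8"
```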
Additionally, Rust releases,
oh shoot, that says six months,
that should say six weeks.
Rust releases a new version every six weeks.
It's just like Firefox in that respect.
There is additionally a process for changing the language
and for proposing ways
that the language could be improved.
That is a public request for comment system
where you put forth the motivation for the change
and what you believe is required
in order to implement it,
and what the effects would be,
and how this would impact the problem
that you're trying to solve,
as well as what would be required
in order to teach this to new people if we did adopt it.
So there's a very nice process for changing the language,
and for seeing where the language is headed
and getting a sense for the future of the language,
and this is good.
This allows people to have confidence
in where Rust is headed.
Additionally, it's straightforward to write tools
that plug into the compiler
and allow individual projects
to have their own additional changes to the language
or tools to restrict them from writing code
that doesn't belong in their project.
So one of these links goes to a tool called Clippy,
which is basically something that runs
and tells you, "This Rust code
"could be improved in this manner.
"This is inefficient, or this is not idiomatic."
So it allows people to write better Rust code.
Additionally, the second link
goes to a blog post about a really popular project
in the Rust ecosystem,
which makes it very easy to take data in your Rust code
and put it into like a JSON file, for example.
This is through basically extending the language
in well defined ways.
So it allows people to make
really powerful additions to the language
that the core people working on Rust
do not need to spend their time thinking about.
So we've got a very nice separation of concerns here
that allows the community and ecosystem
around the Rust language to really be empowered
while the core Rust team
focuses on the future of the language.
And finally, Rust also makes it really straightforward,
and really empowers you to write higher level
or more expressive forms of code
than people are used to working with
in lower level languages like C++,
and this is through the desire
to encourage zero-cost abstractions,
and the link here goes to a blog post from a few months ago
when there was a big new thing that was released
to enable what's called asynchronous input output,
basically allowing programs to be more efficient
with dealing with input
from networks or the user or other things,
where they don't have to just sit there waiting for it.
They can be doing other things at the same time.
And the way that this was designed
it actually looks far more expressive
than people are used to seeing
in low-level languages like Rust,
but it doesn't incur any real performance impact,
and this is what Rust is really trying to empower,
allowing people to write the code
that expresses their intent and their desire,
rather than the code that best achieves
the performance guarantees they're aiming for.
You shouldn't have to choose between those.
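Asynchronous I/O itself needs external libraries, but the zero-cost-abstraction idea can be shown with nothing but the standard library: the expressive iterator chain and the hand-written loop below compute the same thing, and the compiler turns both into equivalent machine code (the function names are invented for this sketch):

```rust
// Two ways to sum the squares of the even numbers below `n`.
// The iterator chain reads like the programmer's intent; the
// explicit loop is what one might write in C. Rust compiles both
// down to equivalent code: the abstraction costs nothing at runtime.

fn sum_even_squares_expressive(n: u64) -> u64 {
    (0..n).filter(|x| x % 2 == 0).map(|x| x * x).sum()
}

fn sum_even_squares_manual(n: u64) -> u64 {
    let mut total = 0;
    let mut x = 0;
    while x < n {
        if x % 2 == 0 {
            total += x * x;
        }
        x += 1;
    }
    total
}

fn main() {
    assert_eq!(sum_even_squares_expressive(10), sum_even_squares_manual(10));
    // 0 + 4 + 16 + 36 + 64 = 120
    println!("{}", sum_even_squares_expressive(10));
}
```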
So resources for learning Rust and for getting help.
There's a lot of them.
There's a great community out there,
and we've got a lot of documentation
teaching people how to read and write Rust code.
We also curate blog posts and presentations
and talks and things to help people learn better,
and we've got a hub with more links to other resources.
Additionally, if you're getting stuck
there's lots of asynchronous
and synchronous mechanisms you can use.
There's an IRC channel, of course, because it's Mozilla.
There's also a StackOverflow community,
there's a Reddit community,
and we have a discussion forum
where lots of people ask questions
about the code they're writing and saying,
"I don't understand this error.
"I'm having trouble understanding
"why this isn't working."
And there's very helpful people.
There's lots of people that really enjoy answering questions
and helping other people figure out what's going wrong.
So shifting back, we've talked about what Servo is
and what its future is,
and we've talked about why we're using Rust
and what Rust allows us to do.
So what does it take for people
to actually contribute to Servo
if they're really interested in that?
So contributions can take a variety of forms.
They can be running the nightly builds
that I showed earlier.
They get updated every day.
You just go to download.servo.org
and download the most recent one,
and you try out the websites that you're curious about,
and then, most helpfully, you tell us about things
that are not working correctly,
whether they don't look right,
or whether Servo crashes or something.
We've got a built in crash reporter for some cases
that'll open up a GitHub issue automatically.
But this is very helpful to us,
because this allows us to triage
what people are encountering in the wild
and come across things that we haven't seen before.
Additionally, if you're so inclined
you can help take those reports that other people have made
and say, "Okay, A, can I reproduce this?
"Is this something that is not just local
"to that person's machine?
"And B, can I come up with a series of steps
"that will consistently cause that problem to occur?"
If so, let us know.
Then you can also help us
take the whole contents of webpages
and turn those into minimized test cases
that don't require actually visiting the website.
If we can just have like a few lines of HTML or something
in a GitHub issue
that will allow us to see the same problem,
that is really great for us.
It allows us to spend much less time
trying to do all the reproduction
and figuring out what the real issue is.
And then, if you have a programming background
and you're interested in writing some Rust code,
then yeah, we have a tracker for the issues,
and we would encourage people to take a look
at the list of issues in our issue tracker
that we have marked as being good for a beginner,
and we link to them from there
and we aggregate across all of our repositories,
and a lot of people say,
"Oh, I'm good at writing Rust,"
or, "I can write Python code or something."
So there's a lot there.
We really work hard to show people
who are writing their very first Rust code
that it doesn't have to be a daunting process,
but Servo can actually be a very approachable way
to gain your first Rust experience.
Finally, you can also try writing demos
that showcase the advantages that Servo provides.
So I've linked here to one website
which has a bunch of demos in it
from an intern we had last summer
who was just coming up with things
that could really show off
ways that Servo could look better
than other browsers at that time.
So what skills are required in order to do these things?
It can be as simple as just having curiosity,
and being willing to tell us about what's going wrong
in the nightly builds.
For people that really like
burning down problems step by step,
then finding the steps to reproduce
and making the minimized test cases
using your knowledge of HTML.
And then if you've got prior programming experience
in other languages that you'd like to bring to bear,
then yes, making Rust contributions, Python contributions,
those are great, and we really appreciate them,
and we love helping people through that process.
So the other question is
who should be testing Servo
and should we be encouraging people to test with Servo?
And really the answer right now is
only if they're really excited about Servo
and would like to contribute directly.
We are not encouraging people
to try their content in Servo
and make it work in Servo,
because Servo has a lot of missing features,
a lot of incomplete layout,
a lot of incomplete APIs
and there's things that are gonna break.
And sure, if they want to make it look good in Servo,
they're welcome to it,
but there's no reason to be telling people,
"Oh, you should be making it work in Servo.
"This is the new browser."
Most web developers when they hear about
the fact that we're making a new web browser engine,
they go, "Wait, I need to test in another browser now?"
Really go for the people
that are excited about the prospects of Servo,
rather than aiming for the ones
who are just trying to make it work in all the browsers.
And if you want to keep track
of what's happening with Servo,
we've got a blog, that's blog.servo.org.
We have a Twitter account that gets regular updates.
And then the big announcements,
like the stuff about Quantum,
that's gonna appear on Mozilla Hacks and other blogs.
So depending on the level of interest you have,
you can keep track of what's going on in these ways.
So that's all that I've got,
and I would love to take questions from you now.
- Okay wow, thanks Josh.
I still haven't decided if it's easier,
you'll have to give us some feedback,
whether it's better for a speaker
to have some eyeballs to look at while you're talking,
or whether it's better for the user experience
to just see the speaker and the slides.
So anyway, thank you, that was awesome.
We have some great questions on the Etherpad.
Also, I was going to share the link to your deck
with folks on this call,
but I wanted to make sure that was cool with you
before I dropped that-- - Absolutely.
- Into the channels.
So let me give you, Daniele from Italy
couldn't make this call in real time,
but he was the first one into the Etherpad
with a whole slew of questions,
and let me just start with the top one.
I think I'll wrap together the first two that he has.
So, "What are new web APIs that exist in Servo
"but not Firefox, and when will they be available?"
And, "Are there plans for new APIs in Servo,
"like for Web Serial or Web MIDI?"
He calls out Web Bluetooth as one that he's expecting.
So big question about new APIs in Servo.
- That is a very good question.
So really, Web Bluetooth is an interesting exception.
We're not really aiming to introduce new APIs in Servo
that are not available to other web content.
We don't want Servo to be a vehicle
for people to see it as,
"This is the future.
"I'm gonna build things that only work in Servo."
That's not the goal here.
Web Bluetooth is a collaboration
with a university in Hungary
whose area of expertise is in Bluetooth,
so they're really interested in the spec work
that's happening around standardizing Bluetooth on the web.
Servo is an avenue for them
to make a separate implementation
than the one that's in Chrome right now,
and allow them to provide feedback on the standard
that is being developed.
So really I think we're open to having people
implement other APIs that are on the standards track
but not present in other browsers yet,
but we're also gonna be cautious about it.
That being said, we have always talked about
using Servo as a vehicle for prototyping new APIs
that could help drive the web forward,
whether it's making asynchronous versions of existing APIs
that currently are bad for performance.
So things like figuring out the bounding box
of an element on a website
is something that needs to block
waiting for the layout to finish.
We've always talked about
experimenting with a new API
that could avoid having to wait on that,
but mostly we're focusing on
compatibility with existing web content at the moment.
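To make that blocking cost concrete, here is a hypothetical sketch (the names and channel setup are invented for illustration, not Servo's real internals) of a script thread making a synchronous bounding-box query to a layout thread and having to wait for the reply:

```rust
use std::sync::mpsc;
use std::thread;

pub struct BoundingBox {
    pub width: f32,
    pub height: f32,
}

// A synchronous layout query: the script side blocks on `recv`
// until the layout thread has an answer.
pub fn query_bounding_box() -> BoundingBox {
    // Channel the "script thread" uses to send queries to "layout".
    let (query_tx, query_rx) = mpsc::channel::<mpsc::Sender<BoundingBox>>();

    // The "layout thread": answers queries as they arrive.
    let layout = thread::spawn(move || {
        while let Ok(reply_tx) = query_rx.recv() {
            // A real engine would finish running layout before replying.
            let _ = reply_tx.send(BoundingBox { width: 100.0, height: 50.0 });
        }
    });

    // The "script thread" side, like getBoundingClientRect:
    let (reply_tx, reply_rx) = mpsc::channel();
    query_tx.send(reply_tx).unwrap();
    let rect = reply_rx.recv().unwrap(); // blocks until layout replies

    drop(query_tx); // close the channel so the layout thread exits
    layout.join().unwrap();
    rect
}

fn main() {
    let rect = query_bounding_box();
    println!("{} x {}", rect.width, rect.height);
}
```

An asynchronous variant of such an API would hand back the result later instead of parking the script thread on that `recv`.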
- Cool, thank you.
Okay, we've got a couple of questions
that sort of ask about Rust versus C++ or Go.
So Miguel wants to know,
"Why should I use Rust instead of C++ or Go?"
And Daniele kind of on top of that says,
"How do you move people from Go to Rust?"
So answer those two together anyway you like.
- Those are interesting questions.
Personally, I feel like I gave a bunch of the reasons
why using Rust instead of C++ is a solid choice,
because you get the tooling benefits
that don't exist for C++.
You don't have the risk of writing unsafe code.
You cannot do that.
That can be a huge motivating factor
if you don't have to deal
with debugging these problems in the first place.
In terms of comparisons with Go,
that's actually more interesting,
because Go and Rust, they target different audiences really.
Often, if you do not have restrictions
that make you need to use a low level language
like C++ or Rust,
then choosing a language like Go
is a very sensible choice.
It's got a lot of great tooling,
it's got a large community,
it's got a lot of libraries,
and it's often going to be less work than writing Rust code,
but for the times when you require those guarantees,
when you need to avoid the chance
of writing unsafe code, Rust is a no-brainer,
and in those cases, yeah,
I think we have a solid case
for moving people from Go to Rust.
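A small illustrative sketch of that "you cannot write unsafe code" point (an editor's example, not one from the talk): the commented-out lines would be a use-after-free bug in C++, but Rust's compiler rejects them outright.

```rust
// Safe Rust rejects dangling references at compile time; you transfer
// ownership instead, so the data provably lives as long as it is used.
fn main() {
    let outlives;
    {
        let data = String::from("hello");
        // let dangling = &data; // a borrow cannot escape this scope:
        // outlives = dangling;  // error: `data` does not live long enough
        outlives = data; // moving ownership out of the scope is fine
    }
    println!("{}", outlives); // prints "hello"
}
```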
- Okay, and it's possible that you covered some of this,
but two questions came in from Kade in Australia.
So about the genesis of Rust,
"What was the deciding factor that made people
"want to create a new language?
"How do you go about creating a language from scratch?"
And he goes on a little bit more about security,
the security benefits of the URL parser
and using Rust in general.
I'm gonna separate out
the web app security part of that question
'cause the feel is a little different.
But anyways, so yeah, if you could take a stab
at the genesis of Rust and the security benefits.
- Yeah, of course.
So the genesis of Rust,
it was a side project of a Mozilla employee
back in like 2005, 2007 or something, maybe 2009,
where Graydon Hoare was just interested in
programming language research
and wanted to play around with making his own language
which could do various low level things,
and basically a number of years on it had got to the point
where it could do useful things,
and upper level people at Mozilla,
like Brendan Eich, and Robert O'Callahan, and other people
who were really well versed in building browsers
knew that he was doing this and said,
"This looks like it may have some properties
"that would actually make building browsers better.
"It would allow us to have better security,
"possibly we could have better performance,"
and so that's when Mozilla started investing in Rust
and turning it into a sort of an official Mozilla project
that we would employ people to work on.
As for how you go about creating a language from scratch,
it's sort of a combination of coming up with the things
you actually want your language to do.
What do you want it to restrict
and what do you want it to enable?
Figuring out what that looks like in practice,
and then it's a matter
of building some kind of implementation
that can turn these programs into something you can run.
So whether it's building an interpreter,
or whether it's building what's called a front end
that takes in the code in your language,
but then turns that into an existing language,
like C for example,
then you can build on top of other
existing programming language tools
without having to do the full investment there.
That's one way where you can actually play around with it
without doing everything from scratch.
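As a toy illustration of that "build an interpreter first" route (entirely an editor's sketch, not how Rust itself began): a minimal evaluator for arithmetic expressions is about the smallest language implementation you can play with.

```rust
// A tiny expression language and a recursive interpreter for it.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Sub(Box<Expr>, Box<Expr>),
}

fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Sub(a, b) => eval(a) - eval(b),
    }
}

fn main() {
    // The program (1 + 2) - 4, built directly as a syntax tree.
    let program = Expr::Sub(
        Box::new(Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)))),
        Box::new(Expr::Num(4)),
    );
    println!("{}", eval(&program)); // prints -1
}
```

A front end in the sense described above would parse source text into this kind of tree and then emit, say, C instead of evaluating it directly.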
As for the security benefits of the URL parser
and using Rust in general.
So like I said, basically if you build everything in Rust
you have an ecosystem in which it is impossible
to have these problems of data races and dangling pointers
which have plagued browsers.
Flash and these dangling pointers
are the two biggest causes
of security exploits every single year,
and they're always the cause of the biggest bounties
in the competitions to exploit browsers.
And so if Rust actually makes it impossible
to write code that exposes those problems,
that is a huge benefit,
and especially for code like a URL parser,
URL parsing should not be
a hot code path in your browser.
It shouldn't be something that is
causing your browser to be slower than it needs to be.
So the fact that you're writing it
in an unsafe language like C++
as is the case in Gecko, that's risky,
because you are exposing yourself
to cases of potential exploits
for something that doesn't need the raw performance
that languages like C++ provide.
Therefore, building it in Rust,
we still don't lose the performance of languages like C++,
but we also gain the safety aspect of a language like Rust
and can write better code.
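To illustrate the data-race half of that claim (a generic sketch, nothing Servo-specific): safe Rust will not compile a plain mutable counter shared across threads; the type system pushes you into a race-free pattern such as `Arc<Mutex<_>>`.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Safe Rust forces shared mutable state across threads into a
// race-free pattern, so the data races that plague C++ cannot
// be written in the first place.
pub fn parallel_count(threads: u32, increments: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();

    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..increments {
                *counter.lock().unwrap() += 1; // lock gives exclusive access
            }
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total // always threads * increments, never a torn count
}

fn main() {
    println!("{}", parallel_count(4, 1000)); // prints 4000
}
```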
- Cool, thank you.
Is that your question, the last one?
- Yeah, do we have any others?
- Yeah, let me, so a couple more questions
off the Etherpad.
What are the different...
So Vignesh from India asks,
"What are the different evaluation metrics
"taken into consideration
"for benchmarking Servo with other browser engines?"
- It's a good question.
So clearly there's going to be things
like the time it takes to load a page,
or how many rotating, transforming things
you can have on the page at one time.
Like how efficiently you can do the things
that web developers want to do.
That is a huge part of it.
But we're also going to be looking at some way
of measuring that against security benefits,
which are harder to get a particular number for,
but we're going to be thinking of it in terms of,
"Are we at least as performant as existing browser engines,
"but we're also presumably safer because we do it in Rust,
"or are we like 10% slower
"but we're significantly safer because it's Rust?"
That still might be a net win.
I don't think, like certainly in terms
of the components that we're reintegrating
back into Firefox,
it certainly skews more towards the performance aspect,
where it's going to be a tough sell
if we're not keeping Firefox at least as efficient
as it is today,
because that's one of the things
that Rust is supposed to be getting us.
- Cool, I think you answered Kairo's question,
but I will ask it, because I was splitting my attention.
"Are there any plans to port the JavaScript engine
"to Rust in the future?"
- So there's no plans.
There's hopes and desires,
and there are some people
who are really keen on doing that,
but it's a hard sell.
It would be great,
you would have all the benefits of using Rust,
and the JavaScript engine
is a great target for security exploits.
However, it's not the interpreter which is the problem,
it's the just-in-time compilation.
It's when you're generating code on the fly
and executing it later.
That is when you would lose all the benefits
that Rust provides.
So there's a much higher benefit
to rewriting other code in Rust first.
- Awesome, great, okay.
There's four more minutes,
and I see two more questions in the Etherpad,
or at least two more questions,
two more people with questions.
So Michael wants to know.
- So yeah, one of the questions I had
is we've heard before that Chrome,
although an excellent browser,
eats CPU and eats battery for breakfast.
Do we have evidence that supports
that users are looking for a solution to this problem,
and do we think that providing a better browser
could potentially change some of those user habits?
What are your thoughts there?
- So certainly the evidence I'm aware of is anecdotal
of people complaining about it online.
I mean, Microsoft obviously believes
that it's enough to have users change their habits,
because they are popping up things
on Windows right now saying,
"Did you know that Firefox is half as efficient?"
or whatever, and my mother complains about that all the time.
She's very much angry at them in support of me,
but it might be.
Certainly people notice when their phones
start losing battery more rapidly.
I suspect that if we had evidence showing
that we had a browser which could save your battery better
and we really promoted that,
that that would be a great avenue in which to compete.
- Yeah, when it really addresses that pain point,
and a pain point like "it eats my phone's battery"
is something that really resonates with people.
- So let's see.
Tech Speaker Amit from Israel wants to know,
"What are other interesting applications of Rust
happening today, apart from a browser engine?"
- That is a great question.
I would like to direct them to Friends of Rust.
Where is that?
So I will...
Okay, so rust-lang.org/friends.html.
So this is very interesting.
This is companies that have told us
that they are running Rust code in production,
and there's often descriptions under those things
which give a quick overview
of what they're actually doing with Rust.
Dropbox has written about how
they replaced some of their core infrastructure with Rust
that was better in some way.
There's people who are doing things with machine learning,
or with embedded computing,
or doing things related to Bitcoin.
There's clearly a lot of people
that are working on interesting things.
In the open source world,
there's people who are experimenting
with writing games in Rust.
I know of a couple that are not even open source,
like they might actually be trying to turn them into a thing
that they could sell later, I'm not sure.
Certainly it's young, and it's difficult to point to
specific things that people are doing
which are interesting to the community
outside of Rust developers.
There's not a ton of individual products out there
that are based on Rust yet,
but we're certainly getting there.
We're building up the infrastructure,
like libraries for building computer games,
and libraries for doing more efficient networking,
that other people can base their products on.
So it's getting there.
- Sweet, well there's one minute left,
and a couple people with second round questions.
So let me throw one more at you.
So this is from Kade, as well.
"What does Servo mean for web app security?"
If anything comes to mind.
- I don't have anything in particular to say to that.
I think a lot of web app security
is more related to server configuration
and the choices made in the source of the web application
rather than the source of the browser.
- Okay, another security question, this one from
and then this will be the last.
"Does using the GPU in Web Render or text rendering
"negate security aspects of using Rust?"
- That is a good question.
I have no idea, but my suspicion is no,
otherwise that seems like
it would be an unfortunate consequence.
Yeah, no, I don't have any experience in that area,
so I will have to defer to other people
who know more about that.
- And note tech speakers
that it's easy to answer questions
that you don't know the answers to
with grace and authority.
Thank you Josh for that one. (laughs)
And that's a great place to close.
Thank you so much for this.
Thanks everybody who is on live.
Thanks for the questions.
I'm stalling 'cause I feel like
there's something important I should say at the end
that I'm forgetting.
This will be transcoded today,
so for folks in your communities
or friends who couldn't make it to the call now,
it will be available later,
and I will, as usual,
send out for a transcript and captioning
so you can share it and watch it with greater attention,
even if English is not your native tongue, so--
- So one quick announcement from Michael.
- Yeah, we have the monthly Tech Speaker's Meeting
taking place this time next week on the 28th,
that is at its normal time, 10:30 am Pacific time,
and of course you guys will have some emails,
and you should already have those invites,
but just a heads up there.
- And our next masterclass is on the second of March,
so the Thursday of next week.
Next week you get two calls with Mozilla.
That will be Catt Small, who is a product designer
and engineer at Etsy.
She will be talking about telling the full story
and how to craft a great narrative in your presentation.
See you all on the internets.
Thanks a lot, thanks again, Josh.
- Thanks Josh. - Have a great week.
- Thanks for having me.
- Alrighty, ciao ciao.