full speed into the name change

Torsten Ruger 2017-01-02 01:45:44 +02:00
parent 930d006417
commit 4893e243f5
17 changed files with 62 additions and 137 deletions

View File

@ -34,7 +34,7 @@ Off course Celluloid needs native threads, so you'll need to run rubinius or jru
a fix for the problem, if we use celluloid.
But it is a fix, it is not part of the system. The system has sequential calls per thread and threads. Threads are evil as
i explain (rant about?) [here](/salama/threads.html), mainly because of the shared global memory. i explain (rant about?) [here](/rubyx/threads.html), mainly because of the shared global memory.
### Messaging with inboxes

1
CNAME
View File

@ -1 +0,0 @@
salama-vm.org

View File

@ -1,7 +1,7 @@
# [Salama webpages](http://salama-vm.org) # [RubyX webpages](http://ruby-x.org)
Salamas webpage is done with github pages: https://help.github.com/categories/20/articles RubyX's webpage is done with github pages: https://help.github.com/categories/20/articles
###Contribute

View File

@ -13,10 +13,10 @@ layout: site
<h3 class="center">More Detail</h2> <h3 class="center">More Detail</h2>
<div> <div>
<ul class="nav nav-list"> <ul class="nav nav-list">
<li><a href="/salama/layers.html"> Layers of Salama </a> </li> <li><a href="/rubyx/layers.html"> Layers of RubyX </a> </li>
<li><a href="/salama/memory.html"> Memory </a> </li> <li><a href="/rubyx/memory.html"> Memory </a> </li>
<li><a href="/salama/threads.html"> Threads </a> </li> <li><a href="/rubyx/threads.html"> Threads </a> </li>
<li><a href="/salama/optimisations.html"> Optimisation ideas </a> </li> <li><a href="/rubyx/optimisations.html"> Optimisation ideas </a> </li>
</ul>
</div>
</div>

View File

@ -23,7 +23,7 @@
<div class="navbar effect navbar-inverse navbar-fixed-top"> <div class="navbar effect navbar-inverse navbar-fixed-top">
<div class="navbar-inner"> <div class="navbar-inner">
<div class="container"> <div class="container">
<a href="https://github.com/salama/"><img style="position: absolute; top: 0; right: 0; border: 0;" src="https://s3.amazonaws.com/github/ribbons/forkme_right_orange_ff7600.png" alt="Fork me on GitHub"></a> <a href="https://github.com/ruby-x/"><img style="position: absolute; top: 0; right: 0; border: 0;" src="https://s3.amazonaws.com/github/ribbons/forkme_right_orange_ff7600.png" alt="Fork me on GitHub"></a>
<a class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse" href="#"> <a class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse" href="#">
<span class="icon-bar"></span> <span class="icon-bar"></span>
<span class="icon-bar"></span> <span class="icon-bar"></span>
@ -35,7 +35,7 @@
<a href="/index.html">Home</a> <a href="/index.html">Home</a>
</li> </li>
<li class="link4"> <li class="link4">
<a href="/salama/layers.html">Architecture</a> <a href="/rubyx/layers.html">Architecture</a>
</li>
<li class="link6">
<a href="/typed/typed.html">Typed layer</a>

View File

@ -22,7 +22,7 @@ little big. It took a little, but then i started.
I fiddled a little with fancy 2 or even 3d representations but couldn't get things to work.
Also getting used to running ruby in the browser, with opal, took a while.
But now there is a [basic frame](https://github.com/salama/salama-debugger) up, But now there is a [basic frame](https://github.com/ruby-x/salama-debugger) up,
and i can see registers swishing around and ideas of what needs
to be visualized and partly even how, are gushing. Of course it's happening in html,
but that's ok for now.

View File

@ -29,7 +29,7 @@ possible because a lot of the stuff was there already.
- [Parfait](/typed/parfait.html) was pretty much there. Just consolidated it as it is all just adapter.
- The [Register abstraction](/typed/debugger.html) (bottom) was there.
- Using the ast library made things easier.
- A lot of the [parser](https://github.com/salama/salama-reader) could be reused. - A lot of the [parser](https://github.com/ruby-x/salama-reader) could be reused.
And of course the second time around everything is easier (aka hindsight is perfect).

View File

@ -1,7 +0,0 @@
---
layout: site
---
<div style="position: absolute;top: 54px;bottom: 0px;width: 100%;">
<iframe frameborder="0" style="height: 100%;width: 100%;" src="http://dancinglightning.gitbooks.io/the-object-machine/content/"></iframe>
</div>

View File

@ -48,14 +48,13 @@ layout: site
</ul>
</p>
<p>
The lower level, strongly typed layer is <a href="/typed/typed.html">finished.</a>. The lower level, strongly typed layer is <a href="/typed/typed.html">finished</a>.
While it has well known typed language data semantics, it introduces several new concepts:
<ul>
<li> Object based memory (no global memory) </li>
<li> Multiple return addresses based on type </li>
<li> Multiple implementations per function based on type </li>
<li> Explicit <a href="/2015/06/20/the-static-call-chain.html">message and frame objects</a>(no stack)</li> <li> Object oriented calling semantics (not stack based) </li>
<li> <a href="https://github.com/salama/salama/tree/master/lib/register" target="_blank">Register machine abstraction</a></li> <li> <a href="https://github.com/ruby-x/ruby/tree/master/lib/register" target="_blank">Register machine abstraction</a></li>
<li> Extensible instruction set, with arm implementations
</ul>
</p>
@ -65,7 +64,7 @@ layout: site
</p>
<p>
There is also an interpreter (mostly for testing) and a basic
<a href="https://github.com/salama/salama-debugger"> visual debugger</a> which not only helps <a href="https://github.com/ruby-x/salama-debugger"> visual debugger</a> which not only helps
debugging, but also understanding of the machine.
</p>
</div>
@ -73,7 +72,7 @@ layout: site
<div class="span4"> <div class="span4">
<h2 class="center">Docs</h2> <h2 class="center">Docs</h2>
<p> <p>
The short introduction is under the <a href="/salama/layers.html">architecture</a> menu. The short introduction is under the <a href="/rubyx/layers.html">architecture</a> menu.
</p>
<p>
The section on the intermediate representation is <a href="/typed/typed.html">here</a>.

View File

@ -1,6 +1,6 @@
---
layout: project
title: Salama, where it started title: RubyX, where it started
---
<div class="row vspace10">

View File

@ -1,7 +1,7 @@
---
layout: project
title: Ruby in Ruby
sub-title: Salama hopes to make the mysterious more accessible, shed light in the farthest (ruby) corners, and above all, <b>empower you</b> sub-title: RubyX hopes to make the mysterious more accessible, shed light in the farthest (ruby) corners, and above all, <b>empower you</b>
---
<div class="row vspace20">

View File

@ -1,6 +1,6 @@
---
layout: salama layout: rubyx
title: Salama architectural layers title: RubyX architectural layers
---
## Main Layers
@ -17,7 +17,7 @@ to compile ruby.
In a similar way to the c++ example, we need a level between ruby and assembler, as it is too
big a mental step from ruby to assembler. Of course one could try to compile to c, but
since c is not object oriented that would mean dealing with all off c's non oo heritance, like since c is not object oriented that would mean dealing with all off c's non oo heritage, like
linking model, memory model, calling convention etc.
Top down the layers are:
@ -107,11 +107,11 @@ In other words the instruction set is extensible (unlike cpu instruction sets).
Basic object oriented concepts are needed already at this level, to be able to generate a whole
self contained system. Ie what an object is, a class, a method etc. This minimal runtime is called
parfait, and the same objects willbe used at runtime and compile time. parfait, and the same objects will be used at runtime and compile time.
Since working at this low machine level (essentially assembler) is not easy to follow for
everyone, an interpreter was created. Later a graphical interface, a kind of
[visual debugger](https://github.com/salama/salama-debugger) was added. [visual debugger](https://github.com/ruby-x/rubyx-debugger) was added.
Visualizing the control flow and being able to see values updated immediately helped
tremendously in creating this layer. And the interpreter helps in testing, ie keeping it
working in the face of developer change.
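To make the "extensible instruction set" remark above a little more concrete, here is a hypothetical Ruby sketch (the class names are illustrative, not the real register layer): an instruction is a plain object, and the interpreter only needs a handler per instruction class, so new instructions can be added without a fixed cpu-style encoding.

```ruby
# Hypothetical illustration (not the real register layer API) of an
# extensible instruction set: instructions are plain objects, the interpreter
# needs one handler per class, and extending the set means adding a branch.
class Instruction; end

class LoadConstant < Instruction
  attr_reader :register, :constant
  def initialize(register, constant)
    @register = register
    @constant = constant
  end
end

class Interpreter
  def initialize
    @registers = Hash.new(0)   # sixteen named registers in the real machine
  end

  def execute(instruction)
    case instruction
    when LoadConstant
      @registers[instruction.register] = instruction.constant
    else
      raise "no handler for #{instruction.class}"
    end
    @registers
  end
end

Interpreter.new.execute(LoadConstant.new(:r1, 42))  # => {:r1=>42}
```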

View File

@ -1,5 +1,5 @@
---
layout: salama layout: rubyx
title: Types, memory layout and management
---
@ -7,7 +7,7 @@ Memory management must be one of the main horrors of computing. That's why garba
### Object and values
As has been mentioned, in a true OO system, object tagging is not really an option. Tagging being the technique of adding the lowest bit as marker to pointers and thus having to shift ints and loosing a bit. Mri does this for Integers but not other value types. We accept this and work with it and just say "off course" , but it's not modelled well. As has been mentioned, in a true OO system, object tagging is not really an option. Tagging being the technique of adding the lowest bit as marker to pointers and thus having to shift ints and loosing a bit. Mri does this for Integers but not other value types. We accept this and work with it and just say "off course" , but it's not modeled well.
Integers are not Objects like "normal" objects. They are Values, on par with ObjectReferences, and have the following distinctive differences:
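For readers unfamiliar with tagging, a minimal Ruby illustration of the technique described above (this is how mri marks its Fixnums, not project code): objects are word aligned, so a real pointer always has a 0 low bit, while a small integer is shifted left and marked with 1, which is the lost bit the text refers to.

```ruby
# Illustration of pointer tagging, not project code.
def tag_int(n)
  (n << 1) | 1            # 5 becomes 0b1011
end

def tagged_int?(word)
  (word & 1) == 1         # aligned pointers always have a 0 here
end

def untag_int(word)
  word >> 1               # shift the marker back out
end

word = tag_int(5)
tagged_int?(word)         # => true
untag_int(word)           # => 5
```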

View File

@ -1,9 +1,9 @@
---
layout: salama layout: rubyx
title: Optimisation ideas
---
I won't manage to implement all of these ideas in the beginning, so i just jot them down.
### Avoid dynamic lookup
@ -14,10 +14,10 @@ This off course is a broad topic, which may be seen under the topic of caching.
Ruby has dynamic instance variables, meaning you can add a new one at any time. This is as it should be.
But this can easily lead to a dictionary/hash type of implementation. As variable "lookup" is probably *the* most
common thing an OO system does, that leads to bad performance (unnecessarily).
So instead we keep variables laid out c++ style, continuous, array style, at the address of the object. Then we have
to manage that in a dynamic manner. This (as i mentioned [here](memory.html)) is done by the indirection of the Type. A Type is
a dynamic structure mapping names to indexes (actually implemented as an array too, but the api is hash-like).
When a new variable is added, we create a *new* Type and change the Type of the object. We can do this as the Type will
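A minimal sketch of the Type idea just described, with illustrative names rather than the actual Parfait classes: names map to fixed indexes, and adding a variable produces a new Type instead of mutating the old one.

```ruby
# Sketch with illustrative names (not the actual Parfait classes).
class Type
  def initialize(names = [])
    @names = names.freeze     # array backed, but the api is hash-like
  end

  def index_of(name)
    @names.index(name)        # resolved once, then used as a fixed offset
  end

  def add_variable(name)
    Type.new(@names + [name]) # old Type stays valid for existing objects
  end
end

point = Type.new([:x, :y])
point3d = point.add_variable(:z)
point3d.index_of(:z)          # => 2, :x and :y keep their old indexes
```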
@ -29,38 +29,38 @@ So, Problem one fixed: instance variable access at O(1)
Of course that helps with Method access. All Methods are at the end variables on some (class) object. But as we can't very well have the same (continuous) index for a given method name on all classes, it has to be looked up. Or does it?
Well, yes it does, but maybe not more than once: We can conceivably store the result, except of course not in a dynamic
structure as that would defeat the purpose.
In fact there could be several caching strategies, possibly for different use cases, possibly determined by actual run-time
measurements, but for now I just describe a simple one using Data-Blocks, Plocks.
So at a call-site, we know the name of the function we want to call, and the object we want to call it on, and so have to
find the actual function object, and by that the actual call address. In abstract terms we want to create a switch with
3 cases and a default.
So the code is something like, if first cache hit, call first cache, .. times three and if not do the dynamic lookup.
The Plock can store those cache hits inside the code. So then we "just" need to get the cache loaded.
Initializing the cached values is by normal lazy initialization. Ie we check for nil and if so we do the dynamic lookup, and store the result.
Remember, we cache Type against function address. Since Types never change, we're done. We could (as hinted above)
do things with counters or robins, but that is for later.
Alas: While Types are constant, darn the ruby, method implementations can actually change! And while it is tempting to
just create a new Type for that too, that would mean going through existing objects and changing the Type, not good.
So we need change notifications, so when we cache, we must register a change listener and update the generated function,
or at least nullify it.
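A rough Ruby sketch of that call-site cache, under the assumption of helper methods like `receiver.type` and `type.lookup` (illustrative only, a real Plock would keep these slots inside the generated code): three cached Type-to-function entries, lazily filled, with a fallback to the dynamic lookup and a way to nullify entries when a change notification arrives.

```ruby
# Illustrative sketch only; receiver.type and type.lookup are assumed helpers.
class CallSiteCache
  Entry = Struct.new(:type, :function)

  def initialize(method_name)
    @method_name = method_name
    @entries = []                              # up to three cached hits
  end

  def call(receiver, *args)
    type = receiver.type
    entry = @entries.find { |e| e.type.equal?(type) }
    if entry.nil?                              # cache miss: dynamic lookup
      function = type.lookup(@method_name)
      entry = Entry.new(type, function)
      @entries << entry if @entries.size < 3   # the "3 cases", rest stays dynamic
    end
    entry.function.call(receiver, *args)
  end

  # what a change notification would trigger when a method is redefined
  def nullify(type)
    @entries.reject! { |e| e.type.equal?(type) }
  end
end
```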
### Inlining
Ok, this may not need too much explanation. Just work. It may be interesting to experiment how much this saves, and how much
inlining is useful. I could imagine at some point it's the register shuffling that determines the effort, not the
actual call.
Again the key is the update notifications when some of the inlined functions have changed.
And it is important to code the functions so that they have a single exit point, otherwise it gets messy. Up to now this
was quite simple, but then blocks and exceptions are undone.
### Register negotiation
@ -70,16 +70,15 @@ This is a little less baked, but it comes from the same idea as inlining. As cal
More precisely, usually calling conventions have registers in which arguments are passed. And to call an "unknown", ie any function, some kind of convention is necessary.
But on "cached" functions, where the function is known, it is possible to do something else. And since we have the source
(ast) of the function around, we can do things previously impossible.
One such thing may be to recompile the function to accept arguments exactly where they are in the calling function. Well, now that it's written down, it does sound a lot like inlining, except without the inlining :-)
An expansion of this idea would be to have a Negotiator on every function call. Meaning that the calling function would not
do any shuffling, but instead call a Negotiator, and the Negotiator does the shuffling and calling of the function.
This only really makes sense if the register shuffling information is encoded in the Negotiator object (and does not have
to be passed).
Negotiators could do some counting and do the recompiling when it seems worth it. The Negotiator would remove itself from
the chain and connect called and new receiver directly. How much is in this i couldn't say though.
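A very rough sketch of the Negotiator idea, all names hypothetical: it sits between caller and callee, does the register shuffling from its own stored mapping, counts calls, and would trigger a recompile and remove itself from the chain once the call is hot.

```ruby
# All names hypothetical; only to make the idea concrete.
class Negotiator
  def initialize(callee, register_mapping)
    @callee = callee
    @mapping = register_mapping   # e.g. { r1: :r3, r2: :r5 }, not passed per call
    @calls = 0
  end

  def call(registers)
    @calls += 1
    shuffled = registers.dup
    @mapping.each { |from, to| shuffled[to] = registers[from] }
    recompile_and_unlink if @calls > 1000   # hot: connect caller and callee directly
    @callee.call(shuffled)
  end

  def recompile_and_unlink
    # placeholder: regenerate the callee to take arguments where they already
    # are, then remove this Negotiator from the call chain
  end
end
```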

View File

@ -1,5 +1,5 @@
---
layout: salama layout: rubyx
title: Threads are broken
author: Torsten
---
@ -9,24 +9,24 @@ i am not sure yet. But good to get it out on paper as a basis for communication.
### Processes
I find it helps to consider why we have threads. Before threads, unix had only processes and ipc,
so inter-process-communication.
Processes were a good idea, keeping each program safe from the mistakes of others by restricting access to the process's
own memory. Each process had the view of "owning" the machine, being alone on the machine as it were. Each a small turing/
von neumann machine.
But one had to wait for io, the network and so it was difficult, or even impossible to get one process to use the machine
to the hilt.
IPC mechanisms were and are sockets, shared memory regions, files, each with their own sets of strengths, weaknesses and
api's, all deemed complicated and slow. Each switch incurs a process switch and processes are not lightweight structures.
### Thread
And so threads were born as a lightweight mechanism of getting more things done. Concurrently, because when the one
thread is in a kernel call, it is suspended.
#### Green or fibre
The first threads that people did without kernel support were quickly found not to solve the problem so well. Because as any
@ -37,17 +37,17 @@ we find that the different viewpoint can help to express some solutions more nat
#### Kernel threads
The real solution, where the kernel knows about threads and does the scheduling, took some while to become standard and
makes processes more complicated to a fair degree. Luckily we don't code kernels and don't have to worry.
But we do have to deal with the issues that come up. The issue is of course data corruption. I don't even want to go into
how to fix this, or the different ways that have been introduced, because the main thrust becomes clear in the next chapter:
### Broken model
My main point about threads is that they are one of the worst hacks, especially in a c environment. Processes had a good
model of a program with a global memory. The equivalent of threads would have been shared memory with **many** programs
connected. A nightmare. It even breaks that old turing idea and so it is very difficult to reason about what goes on in a
multi threaded program, and the only way this is achieved is by developing a more restrictive model.
In essence the thread memory model is broken. Ideally i would not like to implement it, or if implemented, at least fix it
@ -57,23 +57,22 @@ But what is the fix? It is in essence what the process model was, ie each thread
### Thread memory
In OO it is possible to fix the thread model, just because we have no global memory access. In effect the memory model
must be inverted: instead of almost all memory being shared by all threads and each thread having a small thread local
storage, threads must have mostly thread specific data and a small amount of shared resources.
A thread would thus work as a process used to. In essence it can update any data it sees without restrictions. It must
exchange data with other threads through specified global objects, that take the role of what ipc used to be.
In an oo system this can be enforced by strict pass-by-value over thread borders.
The itc (inter thread communication) objects are the only ones that need current thread synchronization techniques.
The one mechanism that could cover all needs could be a simple list.
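As a sketch of that inter-thread communication, here is what such an itc list could look like in plain Ruby (illustrative only, not project code): the list is the sole synchronized object, and values are deep copied on the way in to enforce pass-by-value across the thread border.

```ruby
# Illustrative only: a shared list as the single synchronization point.
class ItcList
  def initialize
    @list = []
    @mutex = Mutex.new
    @ready = ConditionVariable.new
  end

  def put(value)
    copy = Marshal.load(Marshal.dump(value))   # enforce pass-by-value at the border
    @mutex.synchronize do
      @list << copy
      @ready.signal
    end
  end

  def take
    @mutex.synchronize do
      @ready.wait(@mutex) while @list.empty?
      @list.shift
    end
  end
end
```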
### Salama ### RubyX
The original problem of what a program does during a kernel call could be solved by a very small number of kernel threads.
Any kernel call would be listed and "c" threads would pick them up to execute them and return the result.
All other threads could be managed as green threads. Threads may not share objects, other than a small number of system
provided ones.

View File

@ -3,7 +3,7 @@ layout: typed
title: Register Level Debugger / simulator
---
![Debugger](https://raw.githubusercontent.com/salama/salama-debugger/master/static/debugger.png) ![Debugger](https://raw.githubusercontent.com/rubyx/salama-debugger/master/static/debugger.png)
## Views
@ -30,8 +30,8 @@ over a name to look at the class and it's instance variables (recursively)
### Source View
Next is a view of the Soml source. The Source is reconstructed from the ast as html.
Soml (Salama object machine language) is a statically typed language, Soml (RubyX object machine language) is a statically typed language,
maybe in spirit close to c++ (without the c). In the future Salama will compile ruby to soml. maybe in spirit close to c++ (without the c). In the future RubyX will compile ruby to soml.
While stepping through the code, those parts of the code that are active get highlighted in blue.
@ -43,7 +43,7 @@ Each step will show progress on the register level though (next view)
### Register Instruction view
Salama defines a register machine level which is quite close to the arm machine, but with more RubyX defines a register machine level which is quite close to the arm machine, but with more
sensible names. It has 16 registers (below) and an instruction set that is useful for Soml.
Data movement related instructions implement an indexed get and set. There is also Constant load and

View File

@ -1,64 +0,0 @@
---
layout: site
title: Salama and Ruby, Ruby and Salama
---
<div class="content">
<div class="container theme">
<div class="row vspace30">
<div class="span2 center">
</div>
<div class="span4 center">
<h3><span>The three Rubies</span></h3>
</div>
<div class="span4 center">
<h3><span>and Salama</span></h3>
</div>
</div>
<div class="row vspace10">
<div class="span4">
<h4>Syntax</h4>
<h5>and meaning</h5>
<blockquote><p> Pure OO, blocks, closures, clean syntax, simple but consistent, open classes<br/></p></blockquote>
<p> Just to name a few of the great features of the ruby syntax and its programming model. <br/>
Syntax is an abstract thing, as far as i know there is no ebnf or similar definition of it.
Also as far as i know there is only the mri which is considered the only source of how ruby works. <br/>
With more vm's appearing this is changing and the mpsec is apparently catching up. <br/>
As we are just starting we focus on oo consistency and implement only essential features.
</p>
</div>
<div class="span4">
<h4>Vm</h4>
<h5>Salama</h5>
<blockquote><p> The heart of the salama project is salama, the virtual machine <br /></p></blockquote>
<p>Salama is written in 100% ruby</p>
<p>Salama uses an existing ruby to bootstrap itself</p>
<p>Salama generates native code, and ( with 1+2) creates a native ruby virtual machine. </p>
<p>Salama does not interpret, it parses and compiles (just making sure that's clear)</p>
<p>Salama uses a statically typed value based core with rtti and oo syntax to achieve this
(think c++ with ruby syntax)</p>
</div>
<div class="span4">
<h4>Core Library </h4>
<h5>Parfait</h5>
<blockquote><p> Ruby has core and std lib, with a slightly unclear distinction.
Parfait is a minimalistic core library on which this could be built.
</p></blockquote>
<p>
Stdlib, as Libc, has grown over the decades to provide overlapping and sometimes inconsistent features, most
of which can and should be outside such a standard component.
</p>
<p> Salama considers only that core which can not be supplied through an external gem, this is called
Parfait. It only provides Array and String and an ability to access
the operating system, in 100% ruby.</p>
<p>Full ruby stdlib compliance is not an initial project goal, but may be achieved through external libraries</p>
</div>
</div>
</div>
</div>