diff --git a/project/motivation.html b/project/motivation.html
index 9f393d0..de53826 100755
--- a/project/motivation.html
+++ b/project/motivation.html
@@ -1,7 +1,7 @@
---
layout: project
title: Ruby in Ruby
-sub-title: Salama hopes make the the mysterious more accessible, shed light in the farthest (ruby) corners, and above all,
+sub-title: RubyX hopes to make the mysterious more accessible, shed light in the farthest (ruby) corners, and above all,
diff --git a/salama/layers.md b/rubyx/layers.md
similarity index 96%
rename from salama/layers.md
rename to rubyx/layers.md
index 59a6f9f..49ef73d 100644
--- a/salama/layers.md
+++ b/rubyx/layers.md
@@ -1,6 +1,6 @@
---
-layout: salama
-title: Salama architectural layers
+layout: rubyx
+title: RubyX architectural layers
---
## Main Layers
@@ -17,7 +17,7 @@ to compile ruby.
In a similar way to the c++ example, we need a level between ruby and assembler, as it is too
big a mental step from ruby to assembler. Of course one could try to compile to c, but
-since c is not object oriented that would mean dealing with all off c's non oo heritance, like
+since c is not object oriented, that would mean dealing with all of c's non oo heritage, like
linking model, memory model, calling convention etc.
Top down the layers are:
@@ -107,11 +107,11 @@ In other words the instruction set is extensible (unlike cpu instruction sets).
Basic object oriented concepts are needed already at this level, to be able to generate a whole
self contained system. Ie what an object is, a class, a method etc. This minimal runtime is called
-parfait, and the same objects willbe used at runtime and compile time.
+parfait, and the same objects will be used at runtime and compile time.
Since working at this low machine level (essentially assembler) is not easy to follow for
everyone, an interpreter was created. Later a graphical interface, a kind of
-[visual debugger](https://github.com/salama/salama-debugger) was added.
+[visual debugger](https://github.com/ruby-x/rubyx-debugger) was added.
Visualizing the control flow and being able to see values updated immediately helped
tremendously in creating this layer. And the interpreter helps in testing, ie keeping it
working in the face of developer change.
diff --git a/salama/memory.md b/rubyx/memory.md
similarity index 98%
rename from salama/memory.md
rename to rubyx/memory.md
index f57a19b..340bfab 100644
--- a/salama/memory.md
+++ b/rubyx/memory.md
@@ -1,5 +1,5 @@
---
-layout: salama
+layout: rubyx
title: Types, memory layout and management
---
@@ -7,7 +7,7 @@ Memory management must be one of the main horrors of computing. That's why garba
### Object and values
-As has been mentioned, in a true OO system, object tagging is not really an option. Tagging being the technique of adding the lowest bit as marker to pointers and thus having to shift ints and loosing a bit. Mri does this for Integers but not other value types. We accept this and work with it and just say "off course" , but it's not modelled well.
+As has been mentioned, in a true OO system, object tagging is not really an option. Tagging is the technique of adding the lowest bit as a marker to pointers, and thus having to shift ints and losing a bit. Mri does this for Integers but not other value types. We accept this and work with it and just say "of course", but it's not modeled well.
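To make the tagging technique concrete, here is an illustrative ruby sketch (this is what mri does internally in c; the names are made up and this is not rubyx code):

```ruby
# Integer tagging: the lowest bit marks a tagged int, so a pointer
# (always word-aligned, lowest bit 0) can be told apart from an int.
# The price is the shift, and losing one bit of integer range.
def tag_int(n)
  (n << 1) | 1        # shift left, set the tag bit
end

def tagged_int?(word)
  (word & 1) == 1
end

def untag_int(word)
  word >> 1           # arithmetic shift restores the value (also for negatives)
end

word = tag_int(21)
tagged_int?(word)  # => true
untag_int(word)    # => 21
```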
Integers are not Objects like "normal" objects. They are Values, on par with ObjectReferences, and have the following distinctive differences:
diff --git a/salama/optimisations.md b/rubyx/optimisations.md
similarity index 82%
rename from salama/optimisations.md
rename to rubyx/optimisations.md
index c1f4726..08cacd1 100644
--- a/salama/optimisations.md
+++ b/rubyx/optimisations.md
@@ -1,9 +1,9 @@
---
-layout: salama
+layout: rubyx
title: Optimisation ideas
---
-I won't manage to implement all of these idea in the beginning, so i just jot them down.
+I won't manage to implement all of these ideas in the beginning, so I'll just jot them down.
### Avoid dynamic lookup
@@ -14,10 +14,10 @@ This off course is a broad topic, which may be seen under the topic of caching.
Ruby has dynamic instance variables, meaning you can add a new one at any time. This is as it should be.
But this can easily lead to a dictionary/hash type of implementation. As variable "lookup" is probably *the* most
-common thing an OO system does, that leads to bad performance (unneccessarily).
+common thing an OO system does, that leads to bad performance (unnecessarily).
-So instead we keep variables layed out c++ style, continous, array style, at the address of the object. Then we have
-to manage that in a dynamic manner. This (as i mentioned [here](memory.html)) is done by the indirection of the Type. A Type is
+So instead we keep variables laid out c++ style, continuous, array style, at the address of the object. Then we have
+to manage that in a dynamic manner. This (as I mentioned [here](memory.html)) is done by the indirection of the Type. A Type is
a dynamic structure mapping names to indexes (actually implemented as an array too, but the api is hash-like).
When a new variable is added, we create a *new* Type and change the Type of the object. We can do this as the Type will
@@ -29,38 +29,38 @@ So, Problem one fixed: instance variable access at O(1)
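The Type indirection above can be sketched in a few lines of ruby (hypothetical names, not the actual parfait classes; index_of is O(n) here but conceptually a fixed offset):

```ruby
# A Type maps variable names to slot indexes; objects store values in a
# flat array at those indexes. Adding a variable creates a *new* Type;
# existing Types never change.
class Type
  def initialize(names = [])
    @names = names.freeze    # position in this array == slot in the object
  end

  def index_of(name)
    @names.index(name)
  end

  def add(name)
    Type.new(@names + [name])   # a fresh, again immutable, Type
  end
end

class Obj
  attr_reader :type

  def initialize(type)
    @type  = type
    @slots = []
  end

  def set(name, value)
    idx = @type.index_of(name)
    unless idx
      @type = @type.add(name)    # swap in the new Type, keep old slots
      idx = @type.index_of(name)
    end
    @slots[idx] = value
  end

  def get(name)
    idx = @type.index_of(name)
    idx && @slots[idx]
  end
end
```

Once the Type is known, a variable access is just a load at a fixed offset from the object's address.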
Of course that helps with Method access. All Methods are in the end variables on some (class) object. But as we can't very well have the same (continuous) index for a given method name on all classes, it has to be looked up. Or does it?
-Well, yes it does, but maybe not more than once: We can conceivably store the result, except off course not in a dynamic
+Well, yes it does, but maybe not more than once: We can conceivably store the result, except of course not in a dynamic
structure as that would defeat the purpose.
In fact there could be several caching strategies, possibly for different use cases, possibly determined by actual run-time
measurements, but for now I'll just describe a simple one using Data-Blocks, Plocks.
-So at a call-site, we know the name of the function we want to call, and the object we want to call it on, and so have to
-find the actual function object, and by that the actual call address. In abstract terms we want to create a switch with
+So at a call-site, we know the name of the function we want to call, and the object we want to call it on, and so have to
+find the actual function object, and by that the actual call address. In abstract terms we want to create a switch with
3 cases and a default.
-So the code is something like, if first cache hit, call first cache , .. times three and if not do the dynamic lookup.
+So the code is something like: if first cache hit, call first cache, ... times three, and if not, do the dynamic lookup.
The Plock can store those cache hits inside the code. So then we "just" need to get the cache loaded.
-Initializing the cached values is by normal lazy initialization. Ie we check for nil and if so we do the dynamic lookup, and store the result.
+Initializing the cached values is done by normal lazy initialization. Ie we check for nil, and if so do the dynamic lookup and store the result.
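The three-entry cache with lazy initialization could look something like this (an illustrative ruby sketch with made-up names; the real thing would live in the generated code at the call-site, not in a ruby class, and would cache the Type, not the class):

```ruby
# A call-site with a small inline cache: up to three [type, method]
# pairs. A hit calls the cached method directly; a miss falls through
# to the dynamic lookup and lazily fills the next empty slot.
class CallSite
  def initialize(name)
    @name  = name
    @cache = []
  end

  def call(receiver, *args)
    type = receiver.class                  # stands in for the object's Type
    @cache.each do |cached_type, method|
      return method.bind(receiver).call(*args) if cached_type.equal?(type)
    end
    method = type.instance_method(@name)   # the "dynamic lookup"
    @cache << [type, method] if @cache.length < 3
    method.bind(receiver).call(*args)
  end
end
```

The first call on a given type pays for the lookup, every later call with the same type is a straight compare-and-jump.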
-Remember, we cache Type against function address. Since Types never change, we're done. We could (as hinted above)
+Remember, we cache Type against function address. Since Types never change, we're done. We could (as hinted above)
do things with counters or robins, but that is for later.
-Alas: While Types are constant, darn the ruby, method implementations can actually change! And while it is tempting to
+Alas: While Types are constant, darn the ruby, method implementations can actually change! And while it is tempting to
just create a new Type for that too, that would mean going through existing objects and changing the Type: not good.
So we need change notifications, so when we cache, we must register a change listener and update the generated function,
or at least nullify it.
### Inlining
-Ok, this may not need too much explanation. Just work. It may be intersting to experiment how much this saves, and how much
-inlining is useful. I could imagine at some point it's the register shuffling that determines the effort, not the
+Ok, this may not need too much explanation. Just work. It may be interesting to experiment how much this saves, and how much
+inlining is useful. I could imagine at some point it's the register shuffling that determines the effort, not the
actual call.
-Again the key is the update notifications when some of the inlined functions have changed.
+Again the key is the update notifications when some of the inlined functions have changed.
-And it is important to code the functions so that they have a single exit point, otherwise it gets messy. Up to now this
+And it is important to code the functions so that they have a single exit point, otherwise it gets messy. Up to now this
was quite simple, but then blocks and exceptions are not yet done.
### Register negotiation
@@ -70,16 +70,15 @@ This is a little less baked, but it comes from the same idea as inlining. As cal
More precisely, usually calling conventions have registers in which arguments are passed. And to call an "unknown", ie any function, some kind of convention is necessary.
-But on "cached" functions, where the function is know, it is possible to do something else. And since we have the source
+But on "cached" functions, where the function is known, it is possible to do something else. And since we have the source
(ast) of the function around, we can do things previously impossible.
One such thing may be to recompile the function to accept arguments exactly where they are in the calling function. Well, now that it's written down, it does sound a lot like inlining, except without the inlining :-)
-An expansion if this idea would be to have a Negotiator on every function call. Meaning that the calling function would not
+An expansion of this idea would be to have a Negotiator on every function call. Meaning that the calling function would not
do any shuffling, but instead call a Negotiator, and the Negotiator does the shuffling and calling of the function.
This only really makes sense if the register shuffling information is encoded in the Negotiator object (and does not have
to be passed).
-Negotiators could do some counting and do the recompiling when it seems worth it. The Negotiator would remove itself from
+Negotiators could do some counting and do the recompiling when it seems worth it. The Negotiator would remove itself from
the chain and connect caller and receiver directly. How much is in this I couldn't say, though.
-
\ No newline at end of file
diff --git a/salama/threads.md b/rubyx/threads.md
similarity index 86%
rename from salama/threads.md
rename to rubyx/threads.md
index 870371d..90a9ecb 100644
--- a/salama/threads.md
+++ b/rubyx/threads.md
@@ -1,5 +1,5 @@
---
-layout: salama
+layout: rubyx
title: Threads are broken
author: Torsten
---
@@ -9,24 +9,24 @@ i am not sure yet. But good to get it out on paper as a basis for communication.
### Processes
-I find it helps to consider why we have threads. Before threads, unix had only processes and ipc,
+I find it helps to consider why we have threads. Before threads, unix had only processes and ipc,
so inter-process-communication.
Processes were a good idea, keeping each program safe from the mistakes of others by restricting access to the process's
own memory. Each process had the view of "owning" the machine, being alone on the machine as it were. Each a small turing/
von neumann machine.
-But one had to wait for io, the network and so it was difficult, or even impossible to get one process to use the machine
+But one had to wait for io or the network, and so it was difficult, or even impossible, to get one process to use the machine
to the hilt.
-IPC mechnisms were and are sockets, shared memory regions, files, each with their own sets of strengths, weaknesses and
+IPC mechanisms were and are sockets, shared memory regions, files, each with their own sets of strengths, weaknesses and
api's, all deemed complicated and slow. Each exchange incurs a process switch, and processes are not lightweight structures.
### Thread
-
+
And so threads were born as a lightweight mechanism of getting more things done. Concurrently, because when the one
thread is in a kernel call, it is suspended.
-
+
#### Green or fibre
The first threads, which people did without kernel support, were quickly found not to solve the problem so well. Because as any
@@ -37,17 +37,17 @@ we find that the different viewpoint can help to express some solutions more nat
#### Kernel threads
-The real solution, where the kernel knows about threads and does the scheduling, took some while to become standard and
+The real solution, where the kernel knows about threads and does the scheduling, took some while to become standard and
makes processes more complicated by a fair degree. Luckily we don't code kernels and don't have to worry.
-But we do have to deal with the issues that come up. The isse is off course data corruption. I don't even want to go into
+But we do have to deal with the issues that come up. The issue is of course data corruption. I don't even want to go into
how to fix this, or the different ways that have been introduced, because the main thrust becomes clear in the next chapter:
### Broken model
My main point about threads is that they are one of the worst hacks, especially in a c environment. Processes had a good
model of a programm with a global memory. The equivalent of threads would have been shared memory with **many** programs
-connected. A nightmare. It even breaks that old turing idea and so it is very difficult to reason about what goes on in a
+connected. A nightmare. It even breaks that old turing idea and so it is very difficult to reason about what goes on in a
multi threaded program, and the only ways this is achieved is by developing a more restrictive model.
In essence the thread memory model is broken. Ideally i would not like to implement it, or if implemented, at least fix it
@@ -57,23 +57,22 @@ But what is the fix? It is in essence what the process model was, ie each thread
### Thread memory
-In OO it is possible to fix the thread model, just because we have no global memory access. In effect the memory model
-must be inverted: instead of almost all memory being shared by all threads and each thread having a small thread local
+In OO it is possible to fix the thread model, just because we have no global memory access. In effect the memory model
+must be inverted: instead of almost all memory being shared by all threads and each thread having a small thread local
storage, threads must have mostly thread specific data and a small amount of shared resources.
-A thread would thus work as a process used. In essence it can update any data it sees without restrictions. It must
+A thread would thus work as a process used to. In essence it can update any data it sees without restrictions. It must
exchange data with other threads through specified global objects, that take the role of what ipc used to be.
-In an oo system this can be enforced by strict pass-by-value over thread borders.
+In an oo system this can be enforced by strict pass-by-value over thread borders.
The itc (inter thread communication) objects are the only ones that need current thread synchronization techniques.
The one mechanism that could cover all needs could be a simple list.
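A minimal sketch of such an itc list object (hypothetical names, and assuming a Marshal-style deep copy stands in for strict pass-by-value; the Mutex/ConditionVariable pair is only needed on this one shared object):

```ruby
# An inter-thread-communication channel: the list is the only shared,
# synchronized structure, and everything crossing it is deep-copied,
# so no object is ever shared between two threads.
class Channel
  def initialize
    @list  = []
    @mutex = Mutex.new
    @ready = ConditionVariable.new
  end

  def send(message)
    copy = Marshal.load(Marshal.dump(message))   # strict pass-by-value
    @mutex.synchronize do
      @list << copy
      @ready.signal
    end
  end

  def receive
    @mutex.synchronize do
      @ready.wait(@mutex) while @list.empty?
      @list.shift
    end
  end
end
```

The receiving thread gets a value that is equal to, but never identical with, what was sent, so neither side can mutate the other's data.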
-### Salama
+### RubyX
The original problem of what a program does during a kernel call could be solved by a very small number of kernel threads.
Any kernel call would be listed and "c" threads would pick them up to execute them and return the result.
All other threads could be managed as green threads. Threads may not share objects, other than a small number of system
provided ones.
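The kernel-call idea could be sketched like so (hypothetical names; a small pool of native threads executes the blocking calls, so green threads never block in the kernel themselves):

```ruby
# Green threads list their kernel calls on a queue; a small pool of
# native "c" threads pops them, executes the blocking call, and hands
# the result back through a per-call result queue.
class KernelCallPool
  def initialize(size = 2)
    @queue = Queue.new
    @workers = size.times.map do
      Thread.new do
        while (job = @queue.pop)
          call, result = job
          result << call.call        # execute the blocking call natively
        end
      end
    end
  end

  # Called from a green thread: enqueue the call, wait for the result.
  def submit(&call)
    result = Queue.new
    @queue << [call, result]
    result.pop
  end
end
```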
-
diff --git a/typed/debugger.md b/typed/debugger.md
index 99bd14d..2050d23 100644
--- a/typed/debugger.md
+++ b/typed/debugger.md
@@ -3,7 +3,7 @@ layout: typed
title: Register Level Debugger / simulator
---
-![Debugger](https://raw.githubusercontent.com/salama/salama-debugger/master/static/debugger.png)
+![Debugger](https://raw.githubusercontent.com/ruby-x/rubyx-debugger/master/static/debugger.png)
## Views
@@ -30,8 +30,8 @@ over a name to look at the class and it's instance variables (recursively)
### Source View
Next is a view of the Soml source. The Source is reconstructed from the ast as html.
-Soml (Salama object machine language) is is a statically typed language,
-maybe in spirit close to c++ (without the c). In the future Salama will compile ruby to soml.
+Soml (RubyX object machine language) is a statically typed language,
+maybe in spirit close to c++ (without the c). In the future RubyX will compile ruby to soml.
While stepping through the code, those parts of the code that are active get highlighted in blue.
@@ -43,7 +43,7 @@ Each step will show progress on the register level though (next view)
### Register Instruction view
-Salama defines a register machine level which is quite close to the arm machine, but with more
+RubyX defines a register machine level which is quite close to the arm machine, but with more
sensible names. It has 16 registers (below) and an instruction set that is useful for Soml.
Data movement related instructions implement an indexed get and set. There is also Constant load and
diff --git a/what_is.html b/what_is.html
deleted file mode 100644
index 7f3bb83..0000000
--- a/what_is.html
+++ /dev/null
@@ -1,64 +0,0 @@
----
-layout: site
-title: Salama and Ruby, Ruby and Salama
----
-
-
-
-
-
-
-
-
-
The three Rubies
-
-
-
and Salama
-
-
-
-
-
-
Syntax
-
and meaning
-
Pure OO, blocks, closures,clean syntax, simple but consistant, open classes
-
Just to name a few of the great features of the ruby syntax and it's programming model.
- Syntax is an abstract thing, as far as i know there is no ebnf or similar definition of it.
- Also as far as i know there is only the mri which is considered the only source of how ruby works.
- With more vm's appearing this is changing and the mpsec is apparently catching up.
- As we are just starting we focus on oo consistency and implement only essential features.
-
-
-
-
-
Vm
-
Salama
-
The heart of the salama project is salama, the virtual machine
-
Salama is written in 100% ruby
-
Salama uses an existing ruby to bootstrap itself
-
Salama generates native code, and ( with 1+2) creates a native ruby virtual machine.
-
Salama does not interpret, it parses and compiles (just making sure that's clear)
-
Salama uses a statically typed value based core with rtti and oo syntax to achieve this
- (think c++ with ruby syntax)
-
-
-
-
-
Core Library
-
Parfait
-
Ruby has core and std lib, with a slightly unclear distinction.
- Parfait is a minimalistic core library on which this could be built.
-
-
- Stdlib, as Libc , have grown over the decades to provide overlapping and sometimes inconsistant features, most
- of which can and should be outside such a standard component.
-
-
Salama considers only that core which can not be suplied though an external gem, this is called
- Parfait. It only provides Array and String and an ability to access
- the operating system, in 100% ruby.
-
Full ruby stdlib compliance is not an initial project goal, but may be achieved through external libraries
-
-
-
-
-