author    Andrew Dryga <andrew@dryga.com>  2017-02-14 11:30:41 +0200
committer Andrew Dryga <andrew@dryga.com>  2017-02-14 11:31:29 +0200
commit    7c06ca6231b812965305522284dd9f2653ced98d (patch)
tree      938476941b4275cdeef2e7824d9596ccdd47bee4 /erts/emulator/internal_doc
parent    f3624a5a6357f2ebbdaad8785ea0f259bedd64bc (diff)
download  erlang-7c06ca6231b812965305522284dd9f2653ced98d.tar.gz
Fixed typos in erts
Diffstat (limited to 'erts/emulator/internal_doc')
-rw-r--r--  erts/emulator/internal_doc/DelayedDealloc.md | 18
-rw-r--r--  erts/emulator/internal_doc/PortSignals.md    |  2
-rw-r--r--  erts/emulator/internal_doc/SuperCarrier.md   |  2
-rw-r--r--  erts/emulator/internal_doc/ThreadProgress.md |  8
-rw-r--r--  erts/emulator/internal_doc/Tracing.md        |  2
5 files changed, 16 insertions, 16 deletions
diff --git a/erts/emulator/internal_doc/DelayedDealloc.md b/erts/emulator/internal_doc/DelayedDealloc.md
index b7d87b839f..4b7c774141 100644
--- a/erts/emulator/internal_doc/DelayedDealloc.md
+++ b/erts/emulator/internal_doc/DelayedDealloc.md
@@ -19,7 +19,7 @@ the Erlang VM where memory allocation/deallocation is frequent and
references to memory also are passed around between threads this
solution will also scale poorly due to lock contention.
-Functionality Used to Adress This problem
+Functionality Used to Address This problem
-----------------------------------------
In order to reduce contention due to locking of allocator instances we
@@ -44,12 +44,12 @@ deallocation.
The "message box" is implemented using a lock free single linked list
through the memory blocks to deallocate. The order of the elements in
this list is not important. Insertion of new free blocks will be made
-somewhere near the end of this list. Requirering that the new blocks
+somewhere near the end of this list. Requiring that the new blocks
need to be inserted at the end would cause unnecessary contention when
large amount of memory blocks are inserted simultaneous by multiple
threads.
-The data structure refering to this single linked list cover two cache
+The data structure referring to this single linked list cover two cache
lines. One cache line containing information about the head of the
list, and one cache line containing information about the tail of the
list. This in order to reduce cache line ping ponging of this data
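As a rough illustration of the layout this hunk describes (a sketch, not code from the commit; everything except the `head.first` and `head.unref_end` names, which appear later in this document, is hypothetical):

```c
#include <stdalign.h>
#include <stdatomic.h>

#define CACHE_LINE_SIZE 64

typedef struct Block Block;
struct Block {
    _Atomic(Block *) next;     /* link in the lock-free singly linked list */
    /* ... the memory block to be deallocated ... */
};

typedef struct {
    /* Head and tail live on separate cache lines so the owning thread
     * (working at the head) and inserting threads (working at the tail)
     * do not ping-pong the same line between cores. */
    alignas(CACHE_LINE_SIZE) struct {
        Block *first;          /* beginning of the list (`head.first`) */
        Block *unref_end;      /* first block other threads may refer to */
    } head;
    alignas(CACHE_LINE_SIZE) struct {
        _Atomic(Block *) last; /* points at, or near, the end of the list */
    } tail;
} DDQueue;
```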
@@ -65,21 +65,21 @@ list. In the uncontended case it will point to the end of the list,
but when simultaneous insert operations are performed it will point to
something near the end of the list.
-When insterting an element one will try to write a pointer to the new
+When inserting an element one will try to write a pointer to the new
element in the next pointer of the element pointed to by the last
pointer. This is done using an atomic compare and swap that expects
-the next pointer to be `NULL`. If this succeds the thread performing
+the next pointer to be `NULL`. If this succeeds the thread performing
this operation moves the last pointer to point to the newly inserted
element.
If the atomic compare and swap described above failed, the last
pointer didn't point to the last element. In this case we need to
-insert the new element somewhere inbetween the element that the last
+insert the new element somewhere between the element that the last
pointer pointed to and the actual last element. If we do it this way
the last pointer will eventually end up at the last element when
threads stop adding new elements. When trying to insert somewhere near
the end and failing to do so, the inserting thread sometimes moves to
-the next element and somtimes tries with the same element again. This
+the next element and sometimes tries with the same element again. This
in order to spread the inserted elements during heavy contention. That
is, we try to spread the modifications of memory to different
locations instead of letting all threads continue to try to modify the
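A minimal sketch of the insert path the hunk above describes, reusing the hypothetical `DDQueue`/`Block` types from the earlier sketch; the real code in ERTS differs in detail, and the step heuristic here is a stand-in:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Stand-in heuristic: step forward about half of the time, spreading
 * contending writers over different elements (and cache lines). */
static bool step_to_next(void) { return rand() & 1; }

static void ddq_enqueue(DDQueue *q, Block *blk)
{
    Block *elem = atomic_load(&q->tail.last);
    Block *next = NULL;

    atomic_store(&blk->next, NULL);
    /* Fast path: CAS expects elem->next == NULL, i.e. elem really is
     * the last element. */
    if (atomic_compare_exchange_strong(&elem->next, &next, blk)) {
        atomic_store(&q->tail.last, blk);  /* move the last pointer */
        return;
    }
    /* Slow path: elem was not last.  Insert blk between elem and the
     * true end; on each CAS failure `next` is refreshed with the
     * current value of elem->next. */
    for (;;) {
        atomic_store(&blk->next, next);    /* link blk in before next */
        if (atomic_compare_exchange_strong(&elem->next, &next, blk))
            return;
        if (next != NULL && step_to_next()) {
            elem = next;                   /* move to the next element */
            next = atomic_load(&elem->next);
        }
        /* ...otherwise retry with the same element. */
    }
}
```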
@@ -87,7 +87,7 @@ same location in memory.
### Head ###
-The head contains pointers to begining of the list (`head.first`), and
+The head contains pointers to beginning of the list (`head.first`), and
to the first block which other threads may refer to
(`head.unref_end`). Blocks between these pointers are only refered to
by the head part of the data structure which is only used by the
@@ -142,7 +142,7 @@ contains this "marker" element.
### Contention ###
-When elements are continously inserted by threads not owning the
+When elements are continuously inserted by threads not owning the
allocator instance, the thread owning the allocator instance will be
able to work more or less undisturbed by other threads at the head end
of the list. At the tail end large amounts of simultaneous inserts may
diff --git a/erts/emulator/internal_doc/PortSignals.md b/erts/emulator/internal_doc/PortSignals.md
index b1afb7c5cb..8782ae4e17 100644
--- a/erts/emulator/internal_doc/PortSignals.md
+++ b/erts/emulator/internal_doc/PortSignals.md
@@ -204,7 +204,7 @@ high limit is 8 KB and the low limit is 4 KB.
Previously all operations sending signals to ports began by acquiring
the port lock, then performed preparations for sending the signal, and
-then finaly sent the signal. The preparations typically included
+then finally sent the signal. The preparations typically included
inspecting the state of the port, and preparing the data to pass along
with the signal. The preparation of data is frequently quite time
consuming, and did not really depend on the port. That is we would
diff --git a/erts/emulator/internal_doc/SuperCarrier.md b/erts/emulator/internal_doc/SuperCarrier.md
index 0ad6af41de..acf722ea37 100644
--- a/erts/emulator/internal_doc/SuperCarrier.md
+++ b/erts/emulator/internal_doc/SuperCarrier.md
@@ -151,7 +151,7 @@ To find the smallest free segment that will satisfy a carrier allocation
size (`stree`). We search in this tree at allocation. If no free segment of
sufficient size was found, the area (`sa` or `sua`) is instead expanded.
If two or more free segments with equal size exist, the one at lowest
-address is choosen for `sa` and highest address for `sua`.
+address is chosen for `sa` and highest address for `sua`.
At carrier deallocation, we want to coalesce with any adjacent free
segments, to form one large free segment. To do that, all free
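The quoted text describes a best-fit search with an address tie-break. A sketch under those assumptions (the node layout and traversal are hypothetical; only the size ordering of `stree` and the `sa`/`sua` tie-break come from the document):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct Seg Seg;
struct Seg {
    char  *start;              /* segment address */
    size_t size;               /* segment size in bytes */
    Seg   *left, *right;       /* size-ordered search tree (`stree`) */
};

/* Find the smallest free segment of at least `want` bytes.  On equal
 * sizes, prefer the lowest address when allocating from `sa`
 * (low_addr == true) and the highest when allocating from `sua`. */
static Seg *stree_best_fit(Seg *node, size_t want, bool low_addr)
{
    Seg *best = NULL;
    while (node != NULL) {
        if (node->size < want) {
            node = node->right;    /* too small: larger sizes are right */
        } else {
            if (best == NULL
                || node->size < best->size
                || (node->size == best->size
                    && (low_addr ? node->start < best->start
                                 : node->start > best->start)))
                best = node;
            node = node->left;     /* look for a tighter fit */
        }
    }
    return best;                   /* NULL: expand `sa` or `sua` instead */
}
```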
diff --git a/erts/emulator/internal_doc/ThreadProgress.md b/erts/emulator/internal_doc/ThreadProgress.md
index 6118bcf0f6..03a802f904 100644
--- a/erts/emulator/internal_doc/ThreadProgress.md
+++ b/erts/emulator/internal_doc/ThreadProgress.md
@@ -60,7 +60,7 @@ threads are managed threads.
### Thread Progress Events ###
Any thread in the system may use the thread progress functionality in
-order to determine when the following events have occured at least
+order to determine when the following events have occurred at least
once in all managed threads:
1. The thread has returned from other code to a known state in the
@@ -160,7 +160,7 @@ calling the following functions:
* `int erts_thr_progress_leader_update(ErtsSchedulerData *esdp)` -
Leader update thread progress.
-Unmanaged threads can delay thread progress beeing made:
+Unmanaged threads can delay thread progress being made:
* `ErtsThrPrgrDelayHandle erts_thr_progress_unmanaged_delay(void)` -
Delay thread progress.
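For illustration only: an unmanaged thread might bracket access to shared data with the delay API like this. The `erts_thr_progress_unmanaged_continue()` counterpart is assumed here; it is not part of the list quoted above.

```c
#include "erl_thr_progress.h"  /* ERTS-internal header (assumption) */

static void unmanaged_peek(void)
{
    /* Hold back thread progress so data we are about to read cannot
     * be deallocated underneath us by managed threads. */
    ErtsThrPrgrDelayHandle dhndl = erts_thr_progress_unmanaged_delay();

    /* ... dereference shared data here ... */

    /* Assumed counterpart that withdraws the delay. */
    erts_thr_progress_unmanaged_continue(dhndl);
}
```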
@@ -251,7 +251,7 @@ doing so. If not zero, the leader isn't allowed to increment the
global counter, and needs to wait before it can do this. When it is
zero, it swaps the `waiting` and `current` counters before increasing
the global counter. From now on the new `waiting` counter will
-decrease, so that it eventualy will reach zero, making it possible to
+decrease, so that it eventually will reach zero, making it possible to
increment the global counter the next time. If we only used one
reference counter it would potentially be held above zero for ever by
different unmanaged threads.
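A sketch of the two-counter scheme the hunk above describes (all names hypothetical; the real implementation also handles the race between reading the current index and incrementing the counter, which is omitted here):

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_int   refc[2];    /* the two reference counters */
    atomic_int   current;    /* index (0 or 1) that delayers increment */
    atomic_ulong global;     /* the global thread progress counter */
} ThrPrgr;

/* Unmanaged thread: take a reference on the *current* counter.  This
 * does not block the next global increment, only the one after it. */
static int delay(ThrPrgr *tp)
{
    int ix = atomic_load(&tp->current);
    atomic_fetch_add(&tp->refc[ix], 1);
    return ix;               /* handle: which counter we incremented */
}

static void undelay(ThrPrgr *tp, int ix)
{
    atomic_fetch_sub(&tp->refc[ix], 1);
}

/* Leader: may only bump the global counter once the *waiting* counter
 * has drained to zero; swapping the roles of the counters then lets
 * the other one drain in turn. */
static bool leader_try_increment(ThrPrgr *tp)
{
    int cur = atomic_load(&tp->current);
    int waiting = cur ^ 1;

    if (atomic_load(&tp->refc[waiting]) != 0)
        return false;                    /* delays pending; wait */
    atomic_store(&tp->current, waiting); /* swap waiting and current */
    atomic_fetch_add(&tp->global, 1);
    return true;
}
```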
@@ -261,7 +261,7 @@ prevent the next increment of the global counter, but instead the
increment after that. This is sufficient since the global counter
needs to be incremented two times before thread progress has been
made. It is also desirable not to prevent the first increment, since
-the likelyhood increases that the delay is withdrawn before any
+the likelihood increases that the delay is withdrawn before any
increment of the global counter is delayed. That is, the operation
will cause as little disruption as possible.
diff --git a/erts/emulator/internal_doc/Tracing.md b/erts/emulator/internal_doc/Tracing.md
index 728f315263..7f97f64765 100644
--- a/erts/emulator/internal_doc/Tracing.md
+++ b/erts/emulator/internal_doc/Tracing.md
@@ -51,7 +51,7 @@ the new instrumented code. Normally loaded code can only be reached
through external functions calls. Trace settings must be activated
instantaneously without the need of external function calls.
-The choosen solution is instead for tracing to use the technique of
+The chosen solution is instead for tracing to use the technique of
replication applied on the data structures for breakpoints. Two
generations of breakpoints are kept and indentified by index of 0 and
1. The global atomic variables `erts_active_bp_index` will determine
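A sketch of the two-generation flip described in this hunk (the data layout is hypothetical; only the 0/1 indexing and the `erts_active_bp_index` atomic come from the document):

```c
#include <stdatomic.h>

typedef struct {
    int flags;             /* which trace events are enabled */
    /* ... match specs, counters, ... */
} BpData;

static BpData     bp_generations[2];    /* generations 0 and 1 */
static atomic_int erts_active_bp_index; /* which one is active */

/* Executing (instrumented) code only ever reads the active generation,
 * so no external function call is needed to pick up new settings. */
static const BpData *bp_active(void)
{
    return &bp_generations[atomic_load(&erts_active_bp_index)];
}

/* Changing trace settings: stage the inactive generation, then publish
 * it with a single atomic store. */
static void bp_commit(int new_flags)
{
    int staging = atomic_load(&erts_active_bp_index) ^ 1;
    bp_generations[staging].flags = new_flags;
    atomic_store(&erts_active_bp_index, staging);
    /* ... wait for thread progress before touching the old
     *     generation again ... */
}
```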