Real Time Operating Systems

Advanced Microcontrollers
Grzegorz Budzyń
Lecture 12:
Real Time Operating Systems
Plan
• Introduction
• Tasks
• Queues
• Interrupts
• Resources
• Memory management
• Multiprocessor operation
Introduction
What’s an Operating System?
• Provides environment for executing programs
• Process abstraction for
multitasking/concurrency
– Scheduling
• Hardware abstraction layer (device drivers)
• Filesystems
• Communication
• Can be Real-Time
Classic OS vs RTOS
• Desktop OS – OS is in control at all times and runs
applications, OS runs in different address space
• RTOS – OS and embedded software are integrated,
ES starts and activates the OS – both run in the same
address space (RTOS is less protected)
• RTOS includes only service routines needed by the ES
application
• RTOS vendors/products: FreeRTOS, VxWorks, VRTX, Nucleus,
LynxOS, uC/OS
• Desirable RTOS properties: low memory use, a clean
application programming interface, debugging tools,
support for a variety of microprocessors, already-debugged
network drivers
Most Real-Time Systems are
embedded
• An embedded system is a computer built into a
system but not seen by users as being a computer
• Examples
– FAX machines
– Copiers
– Printers
– Scanners
– Routers
– Robots
Characteristics of Real-Time Operating
Systems
• Deterministic
– Operations are performed at fixed,
predetermined times or within predetermined
time intervals
– Concerned with how long the operating system
delays before acknowledging an interrupt
Characteristics of Real-Time Operating
Systems
• Responsiveness
– How long, after acknowledgment, it takes the
operating system to service the interrupt
– Includes the time to begin executing the interrupt
service routine
– Includes the time to complete the interrupt
handling itself
Characteristics of Real-Time Operating
Systems
• User control
– User specifies task priorities
– Specifies the paging policy
– Specifies which processes must always reside in main
memory
– Specifies the disk scheduling algorithms to use
– Specifies the rights of processes
Characteristics of Real-Time Operating
Systems
• Reliability
– Degradation of performance may have
catastrophic consequences
– Attempt either to correct the problem or
minimize its effects while continuing to run
– The most critical, highest-priority tasks must still execute
Characteristics of Real-Time Operating
Systems
• Fail-soft operation
– ability of a system to fail in such a way as to preserve as
much capability and data as possible;
– the RTOS tries to correct the problem or minimize its
effects while continuing to run;
– RTOS is stable, i.e. it will meet the deadlines of its most
critical, highest-priority tasks, even if some less critical task
deadlines are not always met.
Features of Real-Time Operating
Systems
• Fast context switch
• Small size
• Ability to respond to external interrupts
quickly
• Multitasking with interprocess
communication tools such as semaphores,
signals, and events
Features of Real-Time Operating
Systems
• Use of special sequential files that can
accumulate data at a fast rate
• Preemptive scheduling based on priority
• Minimization of intervals during which
interrupts are disabled
• Ability to delay tasks for a fixed amount of time
• Special alarms and timeouts
RTOS – is it necessary?
• Not always
• Simplest approach: cyclic executive
loop
    do part of task 1
    do part of task 2
    do part of task 3
end loop
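The loop above can be made concrete as a small simulation. This is a minimal sketch, not RTOS code: the task functions and the shared `log` are hypothetical, standing in for "do part of task N".

```python
# Minimal cyclic-executive sketch: each "task" does one small slice of
# work per pass through a fixed, statically ordered loop.

def make_task(log, name):
    def do_part():
        log.append(name)  # stand-in for "do part of task N"
    return do_part

def cyclic_executive(tasks, cycles):
    # Static schedule: every task runs once per cycle, in fixed order.
    for _ in range(cycles):
        for task in tasks:
            task()

log = []
tasks = [make_task(log, n) for n in ("task1", "task2", "task3")]
cyclic_executive(tasks, cycles=2)
```

Note that the "schedule" is fully fixed at design time; changing relative task rates means editing the loop body by hand.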
Cyclic Executive Plus Interrupts
• Works fine for many signal processing
applications
• Insanely cheap, predictable interrupt handler:
– When an interrupt occurs, execute a single user-specified instruction
– This typically copies peripheral data into a circular
buffer
– No context switch, no environment save, no delay
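The "copy into a circular buffer" step can be sketched as follows. The buffer size and the `drain` helper on the main-loop side are assumptions for illustration; a real ISR would of course be native code, not Python.

```python
# Cheap interrupt style: the "ISR" only copies one sample into a
# circular buffer; the main loop drains the buffer later.

BUF_SIZE = 4
buf = [0] * BUF_SIZE
head = 0   # next write index
count = 0  # samples currently stored

def isr(sample):
    """Runs on each interrupt: a single copy, no context switch."""
    global head, count
    buf[head] = sample
    head = (head + 1) % BUF_SIZE
    count = min(count + 1, BUF_SIZE)  # oldest sample lost when full

def drain():
    """Main-loop side: read out stored samples in arrival order."""
    global count
    start = (head - count) % BUF_SIZE
    out = [buf[(start + i) % BUF_SIZE] for i in range(count)]
    count = 0
    return out
```

Because the ISR does no context switch and no environment save, its worst-case execution time is tiny and easy to bound.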
Drawbacks of CE + Interrupts
• Main loop still running in lockstep
• Programmer responsible for scheduling
• Scheduling static
• Sporadic events handled slowly
Cooperative Multitasking
• A cheap alternative
• Non-preemptive
• Processes are responsible for relinquishing control
• Example: original Windows
• A process had to periodically call get_next_event() to
let other processes proceed
• Drawbacks:
– Programmer had to ensure this was called frequently
– An errant program would lock up the whole system
• Alternative: preemptive multitasking
Preemptive Multitasking
• Tasks are swapped either voluntarily or
automatically by the RTOS
• Tasks normally selected based upon priority
Preemptive Multitasking - example
(diagram: tasks managed by the RTOS kernel)
Tasks
Tasks
• A task – a simple subroutine
• The ES application makes calls to the RTOS functions to start tasks,
passing to the OS the start address, stack pointer, etc. of each task
• Task States:
– Running
– Ready (possibly: suspended, pended)
– Blocked (possibly: waiting, dormant, delayed)
– [Exit]
• Scheduler – schedules/shuffles tasks between Running and
Ready states
– Blocking is self-blocking by a task; a blocked task is moved back
to the Ready state by another task’s or an interrupt’s signaling
(when the blocking factor is removed/satisfied)
– When an unblocked task has a higher priority than the ‘running’ task,
the scheduler ‘switches’ context immediately (in all pre-emptive
RTOSs)
Tasks
• Most tasks are blocked or ready most of the
time because generally only one task can run at
a time per CPU
• The number of items in the ready queue can
vary greatly, depending on the number of tasks
the system needs to perform and the type of
scheduler that the system uses
Tasks and Data
• Each task has its own context – private registers,
stack, etc. – which is not shared
• In addition, several tasks may share common data (via a
global data declaration; one task uses ‘extern’ to refer
to the shared data declared by another task)
• Shared data causes the ‘shared-data problem’, which is
addressed by making functions ‘reentrant’
Reentrant functions
• Reentrancy – a function is reentrant if it works correctly
regardless of the number of tasks that call it
between interrupts
• Characteristics of reentrant functions:
– Accesses shared variables only atomically, or uses
variables on the callee’s own stack
– Calls only reentrant functions
– Uses system hardware (shared resources)
atomically
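The shared-data problem that reentrancy avoids can be shown as an interleaved read-modify-write. The scenario below (task A preempted mid-update by task B) is simulated sequentially in Python, purely for illustration.

```python
# Why a non-atomic read-modify-write on shared data is unsafe: the
# interleaving is written out explicitly, one "task" step at a time.

def lost_update_demo():
    total = 0          # shared variable
    a_copy = total     # task A reads the shared value...
    total = total + 1  # ...A is preempted; task B does a full update
    total = a_copy + 1 # A resumes and writes back its stale copy + 1
    return total       # B's update has been lost

def add_reentrant(base, x):
    # Touches only its arguments and locals (the stack): reentrant.
    return base + x
```

The reentrant version has no shared state at all, so no interleaving of callers can corrupt it.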
Semaphores and Shared Data
• Semaphore – a variable/lock/flag used to control
access to shared resource (to avoid shared-data
problems in RTOS)
• Protection at the start is via primitive function, called
take, indexed by the semaphore
• Protection at the end is via a primitive function,
called release, also indexed similarly
• Simple semaphores – Binary semaphores are often
adequate for shared data problems in RTOS
Semaphores and Shared Data
• Semaphore Problems
– The initial values of semaphores – when not set properly
or at the wrong place
– The ‘symmetry’ of takes and releases – must match or
correspond – each ‘take’ must have a corresponding
‘release’ somewhere in the ES application
– ‘Taking’ the wrong semaphore unintentionally (issue with
multiple semaphores)
– Holding a semaphore for too long can cause waiting
tasks’ deadlines to be missed
Semaphores and Shared Data
– Variants:
• Binary semaphores – single resource, one-at-a-time, alternating in
use (also for resources)
• Counting semaphores – multiple instances of resources,
increase/decrease of integer semaphore variable
• Mutex – protects data shared while dealing with priority inversion
problem
– Summary – Protecting shared data in RTOS
• Disabling/Enabling interrupts (for task code and interrupt
routines), faster
• Taking/Releasing semaphores (can’t use them in interrupt
routines), slower, affecting response times of those tasks that
need the semaphore
• Disabling task switches (no effect on interrupt routines), holds all
other tasks’ response
Task Synchronization
• Events
• Messages
• Semaphores
• Mutexes
Semaphores
Semaphores
• Invented by Edsger Dijkstra in the mid-1960s
• Offered by most multitasking kernels
• Used for:
– Mutual exclusion
– Signaling the occurrence of an event
– Synchronizing activities among tasks
Semaphores (cont.)
• A semaphore is a key that the code acquires in order
to continue execution
• If the key is already in use, the requesting task is
suspended until the key is released
• There are two types
– Binary semaphores
• 0 or 1
– Counting semaphores
• >= 0
Semaphore Operations
• Initialize (or create)
– Value must be provided
– Waiting list is initially empty
• Wait (or pend)
– Used for acquiring the semaphore
– If the semaphore is available (the semaphore value is
positive), the value is decremented, and the task is not
blocked
– Otherwise, the task is blocked and placed in the waiting
list
– Most kernels allow you to specify a timeout
– If the timeout occurs, the task will be unblocked and an
error code will be returned to the task
Semaphore Operations (cont.)
• Signal (or post)
– Used for releasing the semaphore
– If no task is waiting, the semaphore value is
incremented
– Otherwise, make one of the waiting tasks ready to
run but the value is not incremented
– Which waiting task to receive the key?
• Highest-priority waiting task
• First waiting task
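The initialize/wait/signal bookkeeping described above can be sketched as a small simulation. This models the state transitions only (it is not a real kernel object), and it assumes the "first waiting task" policy, i.e. a FIFO waiting list.

```python
# Counting-semaphore bookkeeping: value + waiting list, with the
# wait (pend) and signal (post) rules from the slides.
from collections import deque

class Semaphore:
    def __init__(self, value):
        self.value = value        # must be provided at creation
        self.waiting = deque()    # waiting list, initially empty

    def wait(self, task):
        """Pend: True if acquired, False if the task blocked."""
        if self.value > 0:
            self.value -= 1       # available: decrement, don't block
            return True
        self.waiting.append(task) # otherwise block and queue the task
        return False

    def signal(self):
        """Post: the task made ready, or None if nobody waited."""
        if self.waiting:
            return self.waiting.popleft()  # value NOT incremented
        self.value += 1
        return None
```

Note the asymmetry: signal only increments the value when no task is waiting, exactly as on the slide.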
Sharing I/O Devices
Encapsulating a Semaphore
Applications of Counting Semaphores
• A counting semaphore is used when a
resource can be used by more than one task
at the same time
• Example:
– Managing a buffer pool of 10 buffers
Task scheduling
Scheduling Algorithms in RTOS
• What is it?
– Multi-tasking requires that all tasks get scheduled to run on
the CPU according to some pre-determined scheme
• Types of Scheduling:
– Co-operative, Pre-emptive
– Round-robin, Deadline Monotonic, Least Slack-Time,
etc.
• Issues:
– Task dead-lines
• Missed dead-lines may have severe consequences
– Context Switch Time
Scheduling Algorithms in RTOS
• Off-line scheduling algorithms
– The algorithm is executed on the entire task set
before actual task activation.
– The schedule generated in this way is stored in a
table and later executed by a dispatcher.
• On-line scheduling algorithms
– The scheduling decisions are taken at run-time
every time a new task enters the system or when
a running task terminates.
Priority inversion
• Priority inversion: low-priority process keeps
high-priority process from running.
• Improper use of system resources can cause
scheduling problems:
– Low-priority process grabs I/O device.
– High-priority process needs the I/O device, but can’t
get it until the low-priority process is done.
• Can cause deadlock.
Solving priority inversion
• Give priorities to system resources.
• Have process inherit the priority of a resource
that it requests.
– Low-priority process inherits priority of device if
higher.
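The inheritance rule above reduces to a one-line computation. This sketch assumes the common RTOS convention that a smaller number means a higher priority (that convention is my assumption, not stated on the slides).

```python
# Priority inheritance: while holding a resource, a task runs at the
# highest priority among its own and those of all tasks blocked on
# the resource. Smaller number = higher priority (assumed convention).

def effective_priority(own, blocked_priorities):
    return min([own] + list(blocked_priorities))
```

So a priority-5 holder with a priority-2 task blocked on its resource temporarily runs at priority 2, and can no longer be starved by medium-priority tasks.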
Context-switching time
• Non-zero context switch time can push limits
of a tight schedule.
• Hard to calculate the effects – they depend on the order
of context switches.
• In practice, OS context switch overhead is
small.
Off-line scheduling algorithms
• The main advantage of this approach is that
the run-time overhead is low and does not
depend on the complexity of the scheduling
algorithm used to build the schedule.
• However, the system is quite inflexible to
environmental changes.
On-line scheduling algorithms
• With on-line algorithms, each task is assigned a
priority, according to one of its temporal
parameters.
• These priorities can be either fixed priorities or
dynamic priorities
– fixed priorities: based on fixed parameters and assigned to
the tasks before their activation
– dynamic priorities: based on dynamic parameters that may
change during system evolution
• When task activations are not known, an on-line
guarantee test has to be done every time a new task
enters the system.
Scheduling Algorithms in RTOS
• Off-line scheduling algorithms:
– Clock Driven Scheduling
– Weighted Round Robin Scheduling
• On-line scheduling algorithms:
– Static:
• Rate monotonic
• Inverse deadline (deadline monotonic)
– Dynamic:
• Earliest deadline first
• Least laxity first
Scheduling Algorithms in RTOS
• Clock Driven
– Simplest
– All parameters about jobs (execution
time/deadline) known in advance.
– Schedule can be computed offline or at some
regular time instances.
– Minimal runtime overhead.
– Not suitable for many applications.
Scheduling Algorithms in RTOS
• Weighted Round Robin
– Jobs scheduled in FIFO manner
– The time quantum given to a job is proportional to its weight
– Example use : High speed switching network
• QoS guarantee.
– Not suitable for precedence constrained jobs.
• If Job A can run only after Job B, there is no point in giving a time
quantum to Job A before Job B completes.
Rate Monotonic Scheduling
• For a set of periodic tasks, assigning the
priorities according to the rate monotonic
(RM) algorithm means that tasks with shorter
periods (higher request rates) get higher
priorities.
• It is an optimal, preemptive, static-priority,
scheduling algorithm used in real-time
operating systems.
• If a task set cannot be scheduled using the
RM algorithm, it cannot be scheduled using
any static-priority algorithm.
Rate Monotonic Scheduling
• The inputs to the algorithm are processes (tasks,
threads) with:
– No resource sharing (processes do not share resources,
e.g. a hardware resource, a queue, or a semaphore)
– Deterministic deadlines exactly equal to periods
– Static priorities (whenever a processor is free or a new
task period begins, the task with the highest static priority
is selected to preempt all other tasks)
– Static priorities assigned according to the rate monotonic
principle (tasks with shorter periods/deadlines are given
higher priorities)
RMS
• The RM algorithm assigns different priorities
proportional to the frequency of tasks
• RM can schedule a set of tasks to meet
deadlines if total resource utilization is less
than 69.3%
• The RM algorithm provides no support for
dynamically changing task periods and/or
priorities and tasks that may experience
priority inversion
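The 69.3% figure is the limit of the Liu & Layland utilization bound for RM scheduling. A small numeric sketch, assuming utilization = execution time / period (a simulation, not RTOS code):

```python
# Liu & Layland bound: n periodic tasks are RM-schedulable if total
# utilization <= n * (2**(1/n) - 1), which tends to ln 2 ~ 0.693
# as n grows.

def rm_bound(n):
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(utilizations):
    """Sufficient (not necessary) schedulability test."""
    n = len(utilizations)
    return sum(utilizations) <= rm_bound(n)
```

The test is only sufficient: a task set above the bound may still be schedulable (e.g. with harmonic periods), but one below it always is.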
RMS - example
RMS Summary
• One major limitation of fixed-priority
scheduling and RMS is that it is not always
possible to fully utilize the CPU.
Inverse deadline algorithm
• Inverse deadline allows a weakening of the
condition which requires equality between
periods and deadlines in static-priority
schemes.
• The inverse deadline algorithm (IDA) assigns
priorities to tasks according to their deadlines:
– The task with the shortest relative deadline is
assigned the highest priority
IDA - example
Earliest deadline first algorithm
• The EDF algorithm assigns priority to tasks
according to their absolute deadlines:
– The task with the earliest deadline will be
executed at the highest priority.
• This algorithm is optimal in the sense of
feasibility:
– If there exists a feasible schedule for a task set,
then EDF is able to find it.
Earliest deadline first algorithm
• EDF does not make any assumption about the
periodicity of the tasks; hence it can be used
for scheduling periodic as well as aperiodic
tasks.
EDF implementation
• On each timer interrupt:
– compute time to deadline;
– choose process closest to deadline.
• Generally considered too expensive to use in
practice.
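The per-interrupt decision above is just a minimum over the ready set. A toy sketch, where the `(name, absolute_deadline)` pair format is my assumption for illustration:

```python
# EDF decision: among ready tasks, run the one whose absolute
# deadline is closest.

def edf_pick(ready):
    return min(ready, key=lambda task: task[1])
```

This linear scan on every timer interrupt is the run-time cost that makes EDF expensive compared with a fixed-priority lookup.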
EDF - example
Least Laxity first algorithm
• The LLF algorithm assigns priority to tasks
according to their relative laxity.
– The task with the smallest laxity will be executed
at the highest priority.
• LLF is optimal and the schedulability of a set
of tasks can be guaranteed using the EDF
schedulability test.
Least Laxity first algorithm
• When a task is executed, its relative laxity is
constant.
• The relative laxity of ready tasks decreases.
• When the laxity of the tasks is computed only
at arrival times, the LLF schedule is equivalent
to EDF.
• If the laxity is computed at every time t, more
context-switching will be necessary.
LLF – example 1
LLF – example 2
Hybrid task scheduling
• Some real-time applications may require
aperiodic tasks.
• Hybrid task sets contain both types of tasks.
– Periodic tasks usually have hard timing constraints
and are scheduled with one of the four basic
algorithms.
– Aperiodic tasks have either soft or hard timing
constraints.
Hybrid task scheduling
• The main objective of the system is to
guarantee the schedulability of all the periodic
tasks.
– If the aperiodic tasks have soft real timing
constraints, the system aims to provide good
average response times.
– If the aperiodic tasks have hard timing constraints,
the system aims to maximize the guarantee ratio
of these aperiodic tasks.
Scheduling of soft aperiodic tasks
• Three main types:
– Background scheduling
– Task server
– Slack stealing
Background scheduling
• Aperiodic tasks are scheduled in the
background when there are no periodic tasks
ready to execute.
• Aperiodic tasks are queued according to a first-come-first-served strategy.
• The major advantage of background
scheduling is its simplicity.
• Its major drawback is that, for high loads due
to periodic tasks, response time of aperiodic
requests can be high.
Background scheduling - example
Task Servers
• A server is a periodic task whose purpose is to serve
aperiodic requests.
• A server is characterized by a period and a
computation time called server capacity.
• The server is scheduled with the algorithm used for
the periodic tasks and, once it is active, it serves the
aperiodic requests within the limit of server capacity.
• The ordering of aperiodic requests does not depend
on the scheduling algorithm used for periodic tasks.
Types of Task Servers
• Polling server
– The simplest server; it serves pending aperiodic
requests at regular intervals equal to its period.
• Deferrable server, priority exchange server,
sporadic server
– More advanced
– Better aperiodic responsiveness
Polling server
• The polling server becomes active at regular
intervals equal to its period.
• It serves pending aperiodic requests within
the limit of its capacity.
– If no aperiodic requests are pending, the polling
server suspends itself until the beginning of its
next period, and the time originally reserved for
aperiodic requests is used by periodic tasks.
Polling Server - example
Deferrable server
• The deferrable server is an extension of the
polling server which improves the response
time of aperiodic requests.
• The deferrable server looks like the polling
server, with some differences:
– the deferrable server preserves its capacity if no
aperiodic requests are pending at the beginning of
its period.
– Thus an aperiodic request that enters the system
just after the server suspends itself can be
executed immediately.
Sporadic server
• Like the deferrable server, the sporadic server
preserves its capacity until an aperiodic
request occurs.
• It differs in the way it replenishes this
capacity.
• It does not recover its capacity to its full value
at the beginning of each new period, but only
after it has been consumed by aperiodic task
executions.
Sporadic Server - example
Slack stealing and joint scheduling
• These two techniques are quite similar and
both use the laxity of the periodic tasks to
schedule aperiodic tasks.
• Slack stealing
– The tasks are scheduled with RMA
• Joint scheduling
– The tasks are scheduled with EDF
Slack stealing and joint scheduling
• Unlike the server techniques, they do not
require the use of a periodic task for aperiodic
task service.
• Each time an aperiodic request enters the
system, time for servicing this request is made
by ‘stealing’ processing time from the periodic
tasks without causing any deadline to be missed.
– The laxity of the periodic tasks is used to schedule
aperiodic requests as soon as possible.
Slack Stealing - example
Scheduling of hard aperiodic tasks
• The hard aperiodic task can be mapped onto a
periodic task and scheduled with the periodic
task set – not always usable!
• Two main types:
– Background scheduling
– Joint scheduling of aperiodic and periodic tasks
Background scheduling
• The principle of this technique consists in
scheduling aperiodic tasks in the background
when there are no periodic tasks ready to
execute according to EDF.
• The aperiodic requests have hard timing
constraints and as they are accepted, they are
queued according to a strict increasing order
of deadlines.
Joint scheduling of aperiodic and periodic tasks
• Each time a new aperiodic task enters the
system, a new EDF schedule is built with a
task set which is composed of the periodic
requests, the previously accepted requests,
and the new request.
• If this schedule meets all the deadlines, then
the new request is accepted.
Example
Message passing
Message passing
• Tasks must be able to communicate with one
another to coordinate their activities or to
share data.
• Tasks can use shared data and semaphores to
allow task communication.
• There are several other methods that most
RTOSs offer:
– queues, mailboxes, and pipes
What is a queue
• Queues are the primary form of intertask
communications
• Can be used to send messages between tasks,
and between interrupts and tasks
• In most cases they are used as thread safe
FIFO (First In First Out) buffers with new data
being sent to the back of the queue, although
data can also be sent to the front
What is a queue
• Messages are sent through queues by copy,
not by reference!!!
Queue - remarks
• Most RTOSs require queues to be initialized
before use
• There can be many queues
• If the code tries to write to a full queue then
the RTOS either returns an error or blocks the
task until some other task reads data from the
queue and thereby creates some space
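The behaviour described above can be put together into one sketch: a bounded FIFO that sends by copy, supports send-to-front, and returns an error when full. The class and its API are illustrative, not any particular RTOS's.

```python
# Bounded message queue with RTOS-like semantics: send-by-copy,
# FIFO order, optional send-to-front, error on full.
import copy
from collections import deque

class MessageQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()

    def send(self, msg, to_front=False):
        if len(self.items) >= self.capacity:
            return False           # full: return an error to the caller
        item = copy.deepcopy(msg)  # by copy: later edits don't leak in
        if to_front:
            self.items.appendleft(item)
        else:
            self.items.append(item)
        return True

    def receive(self):
        return self.items.popleft() if self.items else None
```

The copy on send is the key point: mutating the original message after sending must not change what the receiver sees.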
What is a mailbox
• Mailboxes are like queues
• The typical RTOS has functions:
– to create,
– to write to,
– to read from mailboxes,
– to check whether the mailbox contains any
messages,
– to destroy the mailbox if it is no longer needed
What is a mailbox
• Some RTOSs allow a certain number of
messages in each mailbox (chosen during
compile time),
• Others allow only one message in a mailbox at
a time.
• In certain RTOS, mailbox messages can be
prioritized.
What is a pipe
• Pipes are also like queues.
• The RTOS can create, write to, read from pipes, and so on.
• Pipes in some RTOSs are entirely byte-oriented.
• Some RTOSs use the standard C library
functions fread and fwrite to read from and
write to pipes
Queue, mailbox, pipe - problems
• #1: Queues, mailboxes, and pipes make it easy
to share data among tasks, but they also make
it easy to insert bugs
• #2: Most RTOSs do not restrict which tasks
can read from or write to any given queue,
mailbox, or pipe
• #3: The RTOS cannot ensure that data written
onto a queue, mailbox, or pipe will be
properly interpreted by the task that reads it
Queue, mailbox, pipe - problems
• #4: Running out of space in queues,
mailboxes, or pipes is usually a disaster for
embedded software
• #5: Passing pointers from one task to another
through a queue, mailbox, or pipe is one of
several ways to create shared data
inadvertently
Interrupts
Interrupts
• Embedded applications running on top of an
RTOS require Interrupt Service Routines
(ISRs) to handle interrupts generated by
external events
• Since application code execution is
interrupted (delayed) during the execution of
an ISR, the amount of code in the ISR should
be minimized
RTOS Interrupt Architectures
• Base problem: supporting asynchronous
access to internal RTOS data structures by
interrupt routines and RTOS services
• Modifications to the same structure can be
catastrophic
• Two solutions:
– Unified Interrupt Architecture
– Segmented Interrupt Architecture
Unified Interrupt Architecture
• Interrupts are locked out while an ISR or
system service is modifying critical data
structures inside the RTOS
Segmented Interrupt Architecture
• Less popular approach is not to allow any
asynchronous access to critical data structures by
ISRs or other service calls
• Service call access to critical data structures from an
ISR is “deferred” to a secondary routine - ISR2
• ISR2 is executed along with application threads
under scheduler control
Interrupts - nesting
• Nesting means that when a higher-priority interrupt
occurs, control is passed to the higher-priority ISR, and on
return from it the lower-priority ISR resumes
executing.
• On letting in a higher-priority interrupt call, each ISR
sends an interrupt service message (ISM) to the RTOS.
• A common stack is used for the nested ISR calls, similar
to nested function calls.
Resources
Resource Allocation in RTOS
• Resource Allocation
– The issues with scheduling are applicable here.
– Resources can be allocated:
• Weighted Round Robin
• Priority Based
• Some resources are non-preemptible
– Example: semaphores
• Priority inversion can occur if priority scheduling is used
Memory management
Memory management
• Two types of memory management in RTOSs:
– The first type is used to provide tasks with
temporary data space
– The second type of memory management is used
to dynamically swap code in and out of main
memory
Memory management
• First type:
– The system’s free memory is divided into fixed
sized memory blocks, which can be requested by
tasks
– When a task finishes using a memory block it must
return it to the pool
– Access to pools can be prioritized
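The first type amounts to a fixed-size block pool. A minimal sketch, where representing blocks as `bytearray`s is my assumption for illustration:

```python
# Fixed-size memory-block pool: tasks request a block, use it, and
# must return it to the pool when finished.

class BlockPool:
    def __init__(self, num_blocks, block_size):
        self.free = [bytearray(block_size) for _ in range(num_blocks)]

    def request(self):
        # No searching, splitting, or coalescing: O(1) and
        # deterministic, which is why RTOSs prefer fixed-size blocks.
        return self.free.pop() if self.free else None  # None = empty

    def release(self, block):
        self.free.append(block)  # the task must return the block
```

Allocation never fragments and its timing is constant, at the cost of wasting space inside blocks that are larger than needed.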
Memory management
• Second type – techniques used:
– Memory swapping method keeps the OS and one
task in memory at the same time. When another
task needs to run, it replaces the first task in main
memory, after the first task and its context have
been saved to secondary memory
– In the Overlays method, the code is partitioned
into smaller pieces, which are swapped from disk
to memory. In this way, programs larger than the
available memory can be executed
Memory management
• Second type – techniques used:
– In MFT (multiprogramming with a fixed number of
tasks) method, a fixed number of equalized code
parts are in memory at the same time. As needed,
these parts are overlaid from disk
– MVT (variable number of tasks) method is similar
to MFT except that the size of the partition
depends on the needs of the program in MVT
– Demand paging systems have fixed-size “pages”
that are given to programs as needed, in
non-contiguous memory.
Multiprocessor operation
Multiprocessor operation
• Most RTOSs that are multiprocessor-capable
use a separate instance of the kernel on each
processor
• The multiprocessor ability comes from the
kernel’s ability to send and receive
information between processors
Multiprocessor operation
• In many RTOSs that support multiprocessing, there is
no difference between the single processor case and
the multiprocessor case from the task’s point of view
• The RTOS uses a table in each local kernel that
contains the location of each task in the system
• When one task sends a message to another task the
local kernel looks up the location of the destination
task and routes the message appropriately.
• From the task’s point of view all tasks are executing
on the same processor
RTOS – Design hints
Basic Design using RTOS
• Large number of tasks - pros:
– better control of the priorities and by this of the
relative response times,
– better modularity,
– cleaner code,
– more effective encapsulation of data,
– better hardware sharing,
– simpler tasks
Basic Design using RTOS
• Large number of tasks - cons:
– more data sharing,
– more semaphores,
– more time spent handling semaphores,
– more bugs,
– more time spent on message passing between tasks
Basic Design using RTOS
• Use as few tasks as possible
• Add more tasks to your design only for a clear
reason
• Write short ISRs
• Avoid creating and destroying tasks while the
system is running, because:
– it is time consuming
– it may be difficult to destroy a task without leaving
something behind;
– it may be better to create all the tasks at system
startup and leave them
RTOS – market view
RTOS - Market view
• RTOSes on the market can be categorized into:
– RTOS for small footprint, mobile and connected
devices:
• FreeRTOS
• μC/OS
– RTOS for complex, hard real-time applications
– General purpose RTOS in the embedded industry
• VxWorks
– RTOS for the Java Platform
– Object-oriented RTOS
Thank you for your attention