
Thread Manager
Volume Number: 10
Issue Number: 11
Column Tag: Essential Apple Technology

Related Info: Process Manager, Memory Manager

Thread Manager for Macintosh Applications

Apple’s Development Guide

By Apple Computer, Inc.

Note: The source code files accompanying this article are located on the MacTech CD-ROM or source code disks.

This article presents the motivation, architecture, and programmatic interface of the Thread Manager. The architecture section details how the Thread Manager is integrated into the Macintosh environment and some of the assumptions made in its design. The programmatic interface is then described, with commentary on the use of each of the routines. The article concludes with currently known bugs and compatibility issues.

Product Definition

The Thread Manager is the current MacOS solution for lightweight concurrent processing. Multithreading allows an application process to be broken into simple subprocesses that proceed concurrently in the same overall application context. Conceptually, a thread is the smallest amount of processor context state necessary to encapsulate a computation. Practically speaking, a thread consists of a register set, a program counter, and a stack. Threads have fast context-switch times because of their minimal context state, and they operate within the application context, which gives them full access to application globals. Since threads are hosted by an application, threads within a given application share the address space, file access paths, and other system resources associated with that application. This high degree of data sharing makes threads "lightweight" and makes their context switches very fast relative to the heavyweight context switches between Process Manager processes.

An execution context requires processor time to get anything done, and only one thread at a time can use the processor. So, just like applications, threads are scheduled to share the CPU, and CPU time is scheduled in one of two ways: the Thread Manager provides both cooperative and preemptive threads. Cooperative threads explicitly indicate when they are giving up the CPU. Preemptive threads can be interrupted and gain control at (most) any time. The basis for the difference is that many parts of the MacOS and Toolbox cannot function properly when interrupted or executed at arbitrary times. Because of this restriction, threads using such services must be cooperative; threads that do not use the Toolbox or OS may be preemptive.

Cooperative threads operate under a scheduling model similar to the Process Manager's, wherein they must make explicit calls for other cooperative threads to get control. As a result, they are not limited in the calls they can make as long as yielding calls are properly placed. Preemptive threads operate under a time-slice scheduling model; no special calls are required to surrender the CPU for other preemptive or cooperative threads to gain control. For work that is compute-bound or uses only those MacOS and Toolbox calls that can safely be interrupted, preemptive threads may be the best choice; the resulting code is cleaner than saving partial results and explicitly handing control to other threads.

Part 1: Requirements Summary

The Thread Manager is an operating system enhancement that allows applications to make use of both cooperative and preemptive multitasking within their application context on all 680x0-based Macintosh platforms, and cooperative multitasking on PowerPC-based Macintoshes. There are two basic types of threads (execution contexts) available: cooperative and preemptive. The two types of threads are distinguished by their scheduling models.

The benefits of per-application multitasking are numerous. Many applications are best structured as several independent execution contexts. An image processing application might want to run a filter on a selected area and still allow the user to continue work on another portion of an image. A database application may allow a user to do a search while concurrently adding entries over a network. With the Thread Manager, it is now possible to keep applications responsive to the user even while executing other operations. The Thread Manager also gives applications an easy way to organize multiple instances of the same or similar code. In each example it is possible to write the software as a single thread of execution; however, the application code may be simplified by writing each class of operation as a separate thread and letting the Thread Manager handle the interleaving of the threaded execution contexts.

These examples are not intended to be exhaustive, but they indicate the opportunities to exploit the Macintosh system and build complex applications with this technology. The examples show that the model for multiple threads of control must support a variety of applications and user environments. The Thread Manager architecture will, where possible, use the current Macintosh programming paradigms and preserve software compatibility. The Thread Manager enhances the programming model of the Macintosh because there is little need to develop Time Manager or VBL routines to provide the application with a preemptive execution context. There is also no need to save the complete state of a complex calculation in order to make WaitNextEvent or GetNextEvent calls to remain responsive to the user - simply yield to give the main application thread a chance to handle interface needs.

Hardware Compatibility

The 680x0 version of the Thread Manager has the same hardware requirements as System 7.0; that is, at least 2 megabytes of memory and a Macintosh Plus or newer CPU. The power version of the Thread Manager runs on any Power Macintosh.

Software Compatibility

System 7.0 or greater is required for the 680x0 version of the Thread Manager to operate. The power version of the Thread Manager requires system software for Power Macintosh platforms.

Existing applications that know nothing about the Thread Manager have nothing to fear. The extent of the Thread Manager's influence is to set up the main application thread when the application is launched, and to make an appearance every so often as the preemption timer fires off. Because there is only the application thread, the preemption timer has nothing to do and quietly returns. Thus, the Thread Manager is nearly transparent to existing applications, and no compatibility concerns are expected. New applications, of course, can reap the full benefits of concurrent programming, including a fairly powerful form of multitasking.

The power version of the Thread Manager is built as a Shared Library, named ThreadsLib, that is fully integrated into the Thread Manager.

Intended Users

Developers gain the ability to have multiple, concurrent, logically separate threads of execution within their application. The Thread Manager provides Macintosh developers with a state-of-the-art foundation for building the next generation of applications using a multithreaded application programming environment. Another, less obvious user is system software which operates in the application context. The rule of thumb is: code which operates only within an application context can use the Thread Manager; code that does not, cannot.

Programmatic Interface Description

The Thread Manager performs creation, scheduling and deletion of threads. It allows multiple independent threads of execution within an application, each having its own stack and state. The client application can change the scheduler or context switch parameters to optimize an application for a particular usage pattern.

Applications interface with the Thread Manager through the trap mechanism we know and love. The API is well defined, compelling, and easy to use - no muss, no fuss. For those who need to get down and dirty, the Thread Manager provides routines to modify the behavior of the scheduling mechanism and context switching code.

The API goes through a single trap: ThreadDispatch. Parameters are transferred on the stack and all routines return an OSErr as their result. The trap dispatch numbers have both a parameter size and a routine number encoded in them which allows older versions of the Thread Manager to safely reject calls implemented only by newer versions. A paramErr is returned for calls not implemented.

ThreadsLib is a shared library, so there is no trap-call performance penalty when using the Thread Manager API. There is a distinction between 680x0 threads and power threads: a 680x0 application may only use 680x0 threads, and power applications may only use power threads. Mixing thread types (power or 680x0) within an application type (power or 680x0) is considered a programming error and is not supported.

Performance

The context switch time for an individual thread is negligible due to the minimal context required for a switch. The default context saved by the Thread Manager includes all processor data, address, and FPU (when required) registers. The thread context may be enhanced by the application (to include application specific context) which will increase context switch times (your mileage may vary).

Both cooperative and preemptive threads are eligible for execution only when the application is switched in by the Process Manager. In this way, all threads have the full application context available to them and are executed only when the application gets time to run.

The interleave design of one cooperative context between every preemptive context guarantees that threads which can use the Toolbox (cooperative threads) will be given CPU time to enhance user interface performance.

Part 2: Functional Specifications

Features Overview

Per-application thread initialization is completed prior to entering the application, which allows applications to begin using Thread Manager functions as soon as their main routine begins execution. Thread cleanup is not required, as it is done by the Thread Manager at application termination time.

Applications are provided with general purpose routines for thread pool creation, counting, allocation and deletion. Basic scheduling routines are provided to acquire the ID of the currently executing thread and to yield to any thread. Preemptive thread scheduling routines allow a thread to disable preemption during critical sections of code. Advanced scheduling routines give the ability to yield to a particular thread, and get & set the state of any thread. Mechanisms are also provided to customize the thread scheduler and add custom context switching routines.

Software Design & Technical Description

Installation: During system startup, the Thread Manager is installed into the system and sets up system-wide globals and patch code.

Initialization: Per-application initialization is done prior to entering the application. This allows applications to take advantage of the Thread Manager functions as soon as they begin execution of the main application thread. Important: The Memory Manager routine MaxApplZone must be called before any thread other than the main application thread allocates memory, or causes memory to be allocated (see the Constraints, Gotchas & Bugs section for more information).

Cleanup: The Thread Manager is called by the Process Manager when an application terminates. This gives the Thread Manager a chance to tear down the threading mechanism for the application and return appropriate system resources, such as memory.

Control: The Thread Manager gets control in three ways. The straightforward way is through API calls made by a threaded application. All calls to the Thread Manager are made through the trap ThreadDispatch (0xABF2). The less straightforward way is via hardware interrupts to give the Thread Manager preemption scheduler a chance to reschedule preemptive threads. For power applications, the Thread Manager is called through the use of library calls to the Thread Manager shared library.

Thread Types: The Thread Manager allows applications to create and begin execution of two types of threads: cooperative and preemptive. Cooperative threads make use of a cooperative scheduling model and can make use of all Toolbox and operating system functions. This type of thread has all the rights and privileges of regular application code, which includes the use of all Toolbox and OS features available to applications today. For 680x0 applications only, preemptive threads make use of a preemptive scheduling model and may not make use of Toolbox or operating system services; only those Toolbox or operating system services which may be called from an interrupt service routine may be called by preemptive threads. The Toolbox and OS calling restrictions include traps like LoadSeg which get called on behalf of your application when an unloaded code segment needs to be loaded. Important: Be sure to preload all code segments that get used by preemptive threads. Also note that preemptive threads, like interrupt service routines, may not make synchronous I/O requests.

Main Application Thread: The main application thread is a cooperative thread and contains the main entry point into the application. This thread is guaranteed to exist and can not be disposed of. All applications will have one main application thread, even if they are not aware of the Thread Manager. The main application thread is defined to be responsible for event gathering (via WaitNextEvent or GetNextEvent). If events are pending in the application event queue when a generic yield call is made (no thread ID is specified) by another cooperative thread, the Thread Manager scheduler chooses the main application thread as the next cooperative thread to run. This gives the main application thread a chance to handle events for user responsiveness.
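
As a sketch (not the only possible structure), a threaded application's main event loop might look like the following; gQuitting and HandleEvent are hypothetical application globals and routines, and YieldToAnyThread is described later in this article:

PROCEDURE MainEventLoop;
VAR
 err:      OSErr;
 theEvent: EventRecord;
BEGIN
 WHILE NOT gQuitting DO
  BEGIN
   IF WaitNextEvent(everyEvent, theEvent, 0, NIL) THEN
    HandleEvent(theEvent);        { hypothetical application routine }
   err := YieldToAnyThread;       { give other cooperative threads time }
  END;
END;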

Memory Management: The Thread Manager provides a method of creating a pool of threads. This allows the application to create a thread pool early in its execution before memory has been used or overly fragmented. Threads may be removed from the thread pool on a stack size best fit or exact match basis for better thread pool management. Thread data structures can be allocated at most any time, provided the Memory Manager routine MaxApplZone has been called (see the Constraints, Gotchas & Bugs section for more information). Important: It is considered a programming error to allocate memory, or cause memory to be allocated, during preemptive execution time or from any thread other than the main application thread before MaxApplZone has been called.

Thread stack requirements are determined by the type of thread being created and the application's specific use of that thread. The stack size of a thread is entirely up to the developer - the Thread Manager can only let the developer know the default size and the currently available thread stack space. Cooperative threads may make Toolbox and OS calls, and therefore generally require a larger stack than threads which cannot make such calls. Stack-based parameter passing from a thread is fully supported, since the Thread Manager does not BlockMove thread stacks in and out of the application's main stack area. Each thread has its own stack, which does not move once allocated.

Scheduling: All scheduling occurs in the context of the currently executing application. When the application gets time to run via the Process Manager, the application’s threads get time via the Thread Manager. Applications which are sleeping, and hence are not scheduled by the Process Manager, do not get their threads executed. Threads are per-application: when the application gets time, its threads get time.

Cooperative and preemptive threads are not given a priority and are scheduled in a round-robin fashion or as dictated by a “yield to” call or a custom scheduler. Both types of threads are guaranteed to begin execution in the normal operating mode of the application. Normal operating mode is defined as the addressing and CPU operation modes into which the application was launched. The operating mode will either be 24 or 32-bit MMU addressing mode, and user or supervisor CPU execution mode. At preemptive reschedule time, the addressing mode of the thread is restored to its preempted state. The CPU operating mode is not changed; rescheduling will only take place if the current thread is executing in the normal application CPU operating mode. If the normal operating mode of the CPU is user mode, and the current thread is executing in supervisor mode when preemption occurs, the Thread Manager does not reschedule and will return control back to the interrupted thread.

Cooperative threads get time when an explicit yield call is made to cause a context switch. All the rules that apply to WaitNextEvent or GetNextEvent hold true for yield calls across cooperative threads. For example, no assumptions can be made about the placement of unlocked handles.
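
For example, a compute-bound cooperative thread might be structured as a simple loop that periodically yields. This is a minimal sketch; DoOneSliceOfWork is a hypothetical application routine that performs a small piece of the calculation and returns TRUE when the whole job is finished:

FUNCTION FilterThread (threadParam: LONGINT): LONGINT;
VAR
 err:  OSErr;
 done: BOOLEAN;
BEGIN
 done := FALSE;
 WHILE NOT done DO
  BEGIN
   done := DoOneSliceOfWork(threadParam);  { hypothetical application routine }
   err := YieldToAnyThread;                { let other cooperative threads run }
  END;
 FilterThread := noErr;   { the function result is copied to threadResult at termination }
END;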

Preemptive threads are not required to make yield calls to cause a context switch (although they certainly may) and share 50% of their CPU time with the currently executing cooperative thread. However, calling yield from a preemptive thread is desirable if that thread is not currently busy.

With the advent of multithreaded applications comes the issue of data coherency. This is the problem of one thread of execution (either cooperative or preemptive) looking at shared data while another thread is changing it. The Thread Manager provides a solution to this problem by giving the application the ability to define a “critical” section of code which locks out preemption. With preemption disabled, a thread may examine or change shared or global data safely. The “critical” code mechanism is provided through the ThreadBeginCritical and ThreadEndCritical calls. ThreadBeginCritical increments a counter semaphore and tells the Thread Manager to lock out the preemption mechanism. ThreadEndCritical does just the opposite - when the counter semaphore reaches zero, the preemption mechanism is re-enabled. The ThreadBeginCritical/ThreadEndCritical pair provides developers with the building blocks needed for direct semaphore support.
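
For example, a sketch of safely updating a counter that is shared between preemptive and cooperative threads might look like this (gSharedCount is a hypothetical application global):

PROCEDURE IncrementSharedCount;
VAR
 err: OSErr;
BEGIN
 err := ThreadBeginCritical;          { disable preemption around the update }
 gSharedCount := gSharedCount + 1;
 err := ThreadEndCritical;            { preemption resumes when the count reaches zero }
END;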

Writing A Custom Thread Scheduler Routine: Preemption is disabled when the custom scheduler is called, which prevents, among other things, reentrancy problems. No yield or other scheduling calls should be made at this time. The custom scheduler is provided with a data record defining thread ID information which includes the size of the data record (for important future directions), the current thread ID, the suggested thread ID (which may be kNoThreadID), and the currently interrupted cooperative thread (or kNoThreadID). In addition to this information, the custom scheduler must have knowledge about the threads it wishes to schedule. If the custom scheduler does not wish to select a thread, it can pass back the suggested thread ID (or kNoThreadID) as the thread to schedule and let the Thread Manager's default scheduler decide. If the custom scheduler does not know about all of the threads belonging to the application (it may not if the system creates threads on behalf of the application), it should occasionally send back the suggested thread ID (or kNoThreadID) to give other threads a chance to be scheduled. Note that due to the round-robin scheduling approach, the ‘other’ threads are not guaranteed to be next in line for scheduling. If the interrupted cooperative thread ID field is not kNoThreadID, the custom scheduler was called during the execution of a preemptive thread and must not schedule a cooperative thread other than the interrupted cooperative thread. Scheduling a different cooperative thread at this time would effectively cause cooperative thread preemption, which could result in a system misunderstanding (crash).

Important: Scheduling with native threads is less complicated because there are no preemptive threads; rescheduling happens only when a thread yields to another thread.
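
A minimal sketch of a custom scheduler follows. It usually defers to the Thread Manager's suggestion, but every fourth call it nominates a hypothetical high-priority preemptive thread; gFavorCounter and gHighPriorityThread are hypothetical application globals, and SetCurrentA5/SetA5 are used because (as discussed later) A5 is not guaranteed to be set up in Thread Manager callbacks. It would be installed with SetThreadScheduler, described in the Advanced Scheduling Routines section.

FUNCTION MyScheduler (schedulerInfo: SchedulerInfoRec): ThreadID;
VAR
 oldA5: LONGINT;
BEGIN
 oldA5 := SetCurrentA5;   { set up A5 before touching application globals }
 MyScheduler := schedulerInfo.SuggestedThreadID;  { default: let the Thread Manager decide }
 gFavorCounter := gFavorCounter + 1;
 IF (gFavorCounter MOD 4 = 0) AND (gHighPriorityThread <> kNoThreadID) THEN
  MyScheduler := gHighPriorityThread;             { hypothetical preemptive thread }
 oldA5 := SetA5(oldA5);   { restore the caller's A5 }
END;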

Context: The default context of a thread consists of the CPU registers, the FPU registers if an FPU is present, and a few low-memory globals. Specifically, the saved data is as follows:

CPU registers:  RD0 - RD7, RA0 - RA7, SR (incl. CCR)

FPU registers:  FPCR, FPSR, FPIAR, FP0 - FP7, FPU frame

For power applications, the context looks something like this:

CPU registers:      R0 - R31

FPU registers:      FP0 - FP31, FPSCR

Machine registers:  CTR, LR, PC, CR, XER

The thread context lives on a thread’s A7 stack, and the location of the thread context is saved at context switch time. The A5 register, which contains a pointer to the application’s “A5 world,” and the initial thread MMU mode are initially set to the same values as for the main application thread. This allows all threads to share the application’s A5 world, which gives threads access to open files and resource chains, for example. The MMU mode of a thread is saved away, and the mode of the interrupted thread is restored, to allow preemption of threads which change the MMU operating mode. The FPU context is fully saved along with the current FPU frame.

Writing a Custom Thread Context Switching Routine: Preemption is disabled when the custom switching routine is called which prevents, among other things, reentrancy problems. There should be no yield or other scheduling calls made at this time. When a custom context switching routine is called, thread context is in transition, so calls to GetCurrentThread and uses of kCurrentThreadID will not be appropriate. Custom switching routines are defined on a per-thread in or out basis. Each thread is treated separately, which allows threads to mix and match custom switchers and parameters. A custom context switcher may be defined for entering a thread and another for exiting the same thread. Each context switching procedure is passed a parameter to be used at the application’s discretion. For example, there could be one custom switching routine that is installed with a different parameter on each thread.

Note: If a custom thread switcher-inner is installed, it will be called before the thread begins execution at the thread entry point.

Important: The entire context is saved by ThreadsLib for any native application. This is due to the fact that compilers can use all the registers during optimization, even the floating point ones.

Programmatic Interface

Data Types

The Thread Manager Gestalt selector and bit field definitions are used to determine whether the threads package is installed. The gestaltThreadMgrPresent bit in the result will be true if the Thread Manager is installed. Other bits in the result field are reserved for future definition.


/* 1 */
CONST
 gestaltThreadMgrAttr         = 'thds'; {Thread Manager attributes}
 gestaltThreadMgrPresent      = 0;  {bit true if Threads present}
 gestaltSpecificMatchSupport  = 1;  {bit true if 'exact match' API supported}
 gestaltThreadsLibraryPresent = 2;  {bit true if ThreadsLib is present}
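
For example, an application can test for the Thread Manager at startup before creating any threads. This is a minimal sketch; it assumes the Gestalt Manager is available, and that MaxApplZone has already been called as part of normal initialization (as required before non-main threads allocate memory):

FUNCTION HasThreadManager: BOOLEAN;
VAR
 err:      OSErr;
 response: LONGINT;
BEGIN
 HasThreadManager := FALSE;
 err := Gestalt(gestaltThreadMgrAttr, response);
 IF err = noErr THEN
  HasThreadManager := BitAnd(response, BitShift(1, gestaltThreadMgrPresent)) <> 0;
END;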

The ThreadState data type indicates the general operational status of a thread. A thread may be waiting to execute, suspended from execution, or executing.


/* 2 */
TYPE
 ThreadState = INTEGER;

CONST
 kReadyThreadState   = 0;  {thread is eligible to run}
 kStoppedThreadState = 1;  {thread is not eligible to run}
 kRunningThreadState = 2;  {thread is running}

The ThreadTaskRef is used to allow calls to the Thread Manager at a time when the application context is not necessarily the current context.


/* 3 */
TYPE
 ThreadTaskRef = Ptr;

The ThreadStyle data type indicates the broad characteristics of a thread. A cooperative thread is one whose execution environment is sufficient for calling Toolbox routines (this requires a larger stack, for example). A preemptive thread is one that does not need to explicitly yield control, and executes preemptively with all other threads.


/* 4 */
TYPE
 ThreadStyle = LONGINT;

CONST
 kCooperativeThread = 1;  {thread can use Macintosh Toolbox}
 kPreemptiveThread  = 2;  {thread doesn't necessarily yield}

Note: kPreemptiveThread is not defined for use with power Thread Manager.

The ThreadID data type identifies individual threads. ThreadIDs are unique within the scope of the application process. There are a few pre-defined symbolic thread IDs to make the interface easier.


/* 5 */
TYPE
 ThreadID = LONGINT;

CONST
 kNoThreadID          = 0;  {no thread at all}
 kCurrentThreadID     = 1;  {thread whose context is current}
 kApplicationThreadID = 2;  {thread created for app at launch}

The ThreadOptions data type specifies options to the NewThread routine.


/* 6 */
TYPE
 ThreadOptions = LONGINT;

CONST
 kNewSuspend       = 1;   {begin new thread in stopped state}
 kUsePremadeThread = 2;   {use thread from supply}
 kCreateIfNeeded   = 4;   {allocate if no premade exists}
 kFPUNotNeeded     = 8;   {don't save FPU context}
 kExactMatchThread = 16;  {force exact match over best fit}

Note: kFPUNotNeeded is ignored by the power Thread Manager because floating point registers are always saved.

The following information is supplied to a custom scheduler.


/* 7 */
TYPE
 SchedulerInfoRecPtr = ^SchedulerInfoRec;
 SchedulerInfoRec = RECORD
  InfoRecSize:              LONGINT;
  CurrentThreadID:          ThreadID;
  SuggestedThreadID:        ThreadID;
  InterruptedCoopThreadID:  ThreadID;
 END;

The following are the type definitions for a thread's entry routine, a custom scheduling routine, custom context switching routine, and a thread termination routine.


/* 8 */
TYPE
 ThreadEntryProcPtr = ProcPtr;  {entry routine}
 { FUNCTION ThreadMain (threadParam: LONGINT): LONGINT; }

 ThreadSchedulerProcPtr = ProcPtr;  {custom scheduler}
 { FUNCTION ThreadScheduler (schedulerInfo: SchedulerInfoRec): ThreadID; }

 ThreadSwitchProcPtr = ProcPtr;  {custom switcher}
 { PROCEDURE ThreadSwitcher (threadBeingSwitched: ThreadID;
    switchProcParam: LONGINT); }

 ThreadTerminationProcPtr = ProcPtr;  {termination routine}
 { PROCEDURE ThreadTerminator (threadTerminated: ThreadID;
    terminationProcParam: LONGINT); }

The following are the type definitions to allow a debugger to watch the creation, deletion and scheduling of threads on a per-application basis.


/* 9 */
TYPE
 DebuggerNewThreadProcPtr = ProcPtr;
 { PROCEDURE DebuggerNewThread (threadCreated: ThreadID); }

 DebuggerDisposeThreadProcPtr = ProcPtr;
 { PROCEDURE DebuggerDisposeThread (threadDisposed: ThreadID); }

 DebuggerThreadSchedulerProcPtr = ProcPtr;
 { FUNCTION DebuggerThreadScheduler (schedulerInfo: SchedulerInfoRec): ThreadID; }

The following are the Thread Manager-specific error codes.

CONST
 threadTooManyReqsErr = -617;
 threadNotFoundErr    = -618;
 threadProtocolErr    = -619;

General Purpose Routines

These routines allow the application to create, initiate, and delete threads.


/* 10 */
FUNCTION CreateThreadPool (threadStyle: ThreadStyle;
 numToCreate:  INTEGER; stackSize: Size):OSErr;

CreateThreadPool creates a specified number of threads having the given style and stack requirements. The thread structures are put into a supply for later allocation by the NewThread routine. This function may be called repeatedly, which adds threads to the application's single thread pool. A pool of threads may be needed if, for example, preemptive threads need to spawn threads. Preemptive threads may only create new threads from an existing thread pool; this prevents Toolbox reentrancy problems that would occur if memory had to be allocated to satisfy the request.

If not all of the threads could be created, none are allocated (it’s all or nothing!).

Note: Threads in the allocation pool can not be individually targeted by any of the Thread Manager routines (i.e. they are not associated with ThreadIDs). The only routines that refer to threads in the allocation pool are NewThread and GetFreeThreadCount, but they address the application pool as a whole.

Note: The stackSize parameter is the requested stack size for this set of pooled threads. This stack must be large enough to handle saved thread context, normal application stack usage, interrupt handling routines, and CPU exceptions. By passing in a stackSize of zero (0), the Thread Manager will use its default stack size for the type of threads being created. To determine the default stack size for a particular thread type, see the GetDefaultThreadStackSize routine.


/* 11 */
Result codes:
 noErr       Specified threads were created and are available
 memFullErr  Insufficient memory to create the thread structures
 paramErr    Unknown threadStyle, or using kPreemptiveThread with the power Thread Manager
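
As an illustration, an application that expects to allocate threads later (possibly from a preemptive thread) might pre-create a small pool at startup. This is a sketch; the pool size is arbitrary:

PROCEDURE PreflightThreadPool;
VAR
 err: OSErr;
BEGIN
 { Four cooperative threads with the default stack size (stackSize = 0). }
 { If the call fails, no threads are added to the pool.                  }
 err := CreateThreadPool(kCooperativeThread, 4, 0);
END;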

FUNCTION GetFreeThreadCount (threadStyle: ThreadStyle;
 VAR freeCount: INTEGER):OSErr;

GetFreeThreadCount finds the number of threads of the given thread style that are available to be allocated. The number of available threads is raised by a successful call to CreateThreadPool or DisposeThread (with the recycleThread parameter set to “true”). The number is lowered by calls to NewThread when a pre-made thread is allocated.


/* 12 */
Result codes:
 noErr     freeCount has the count of available threadStyle threads
 paramErr  Unknown threadStyle, or using kPreemptiveThread with the power Thread Manager

FUNCTION GetSpecificFreeThreadCount(threadStyle: ThreadStyle;
 stackSize: Size; VAR freeCount: INTEGER):OSErr;

GetSpecificFreeThreadCount finds the number of threads of the given thread style and stack size that are available to be allocated. The number of available threads is raised by a successful call to CreateThreadPool or DisposeThread (with the recycleThread parameter set to “true”). The number is lowered by calls to NewThread when a pre-made thread is allocated.


/* 13 */
Result codes:
 noErr     freeCount has the count of available threadStyle threads
 paramErr  Unknown threadStyle, or using kPreemptiveThread with the power Thread Manager

FUNCTION GetDefaultThreadStackSize (threadStyle: ThreadStyle;
 VAR stackSize: Size):OSErr;

GetDefaultThreadStackSize returns the default stack size needed for the type of thread requested. The value returned is the stack size used if zero (0) is passed in the CreateThreadPool and NewThread stackSize parameter. This value is by no means absolute, and most threads do not need as much stack space as the default value. This routine and ThreadCurrentStackSpace are provided to help tune your threads for optimal memory usage.


/* 14 */
Result codes:
 noErr     stackSize has the default stack needed for threadStyle threads
 paramErr  Unknown threadStyle, or using kPreemptiveThread with the power Thread Manager

FUNCTION ThreadCurrentStackSpace (thread: ThreadID;
 VAR freeStack: LONGINT):OSErr;

ThreadCurrentStackSpace returns the current stack space available for the desired thread. Be aware that various system services will run on your thread stack (interrupt routines, exception handlers, etc.) so be sure to account for those in your stack usage calculations. See GetDefaultThreadStackSize for more information.


/* 15 */
Result codes:
 noErr              freeStack has the amount of stack space available in the thread
 threadNotFoundErr  There is no existing thread with the specified ThreadID

Note: When using this routine from a preemptive thread, care must be taken when obtaining information about another thread. It is not always possible to know the stack environment of a preempted thread - the Toolbox may have temporarily changed stacks to perform its functions.


/* 17 */
FUNCTION NewThread (threadStyle: ThreadStyle;
 threadEntry: ThreadEntryProcPtr;
 threadParam: LONGINT;
 stackSize: Size;
 options: ThreadOptions;
 threadResult: LongIntPtr;
 VAR threadMade: ThreadID):OSErr;

NewThread creates or allocates a thread structure with the specified characteristics, and puts the thread's identifier in the threadMade parameter. The threadEntry parameter is the entry address of the thread, and is best represented as a Pascal-style function. The threadParam parameter is passed as a parameter to that function for application-defined uses. When the thread terminates, the function result is put into threadResult (pass nil for threadResult if you are not interested in the thread's result). If an error is returned, the threadMade parameter is set to kNoThreadID.

The ThreadOptions parameter specifies optional behavior of NewThread. Thread options are summed together to create the desired combination of options. The kNewSuspend option indicates that the new thread should begin in the kStoppedThreadState, ineligible for execution. The kUsePremadeThread option requests that the new thread be allocated from an existing pool of premade threads. By default, threads allocated from the thread pool are done so on a stack size best fit basis. The kExactMatchThread option requires threads allocated from the pool to have a stack size which exactly matches the stack size requested by NewThread. The kCreateIfNeeded option gives NewThread permission to allocate an entirely new thread if the supply allocation request can not be honored. The kFPUNotNeeded option will prevent FPU context from being saved for the thread. This option will speed the context switch time for a thread that does not require FPU context.

Important: The storage for threadResult needs to be available when the thread terminates. Therefore, an appropriate storage place would be in the application globals or as a local variable of the application's main routine. An inappropriate place would be a local variable of a subroutine that completes before the thread terminates.

Important: Preemptive threads may only call this routine if the kUsePremadeThread option is set.

Note: The stackSize parameter is the requested stack size of the new thread. This stack must be large enough to handle saved thread context, normal application stack usage, interrupt handling routines, and CPU exceptions. By passing in a stackSize of zero (0), the Thread Manager will use its default stack size for the type of thread being created. To determine the default stack size for a particular thread type, see the GetDefaultThreadStackSize routine.

Note: ThreadsLib will not allow you to create preemptive threads, and it ignores the kFPUNotNeeded option, since all of the native context has to be saved.


/* 18 */
Result codes:
 noErr                 Specified thread was made or allocated
 memFullErr            Insufficient memory to create the thread structure
 threadTooManyReqsErr  There are no matching thread structures available
 paramErr              Unknown threadStyle, or using kPreemptiveThread with the power Thread Manager
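
Putting this together, here is a sketch of spawning a cooperative thread using an entry routine like the FilterThread sketch shown earlier; gFilterResult is a hypothetical application global, used so the result storage outlives the creating routine:

VAR
 gFilterResult: LONGINT;  {storage must outlive the thread; see Important note above}

PROCEDURE StartFilterThread (theParam: LONGINT);
VAR
 err:       OSErr;
 newThread: ThreadID;
BEGIN
 { Try a premade thread first; create a fresh one if the pool is empty. }
 err := NewThread(kCooperativeThread, @FilterThread, theParam,
                  0,  { default stack size }
                  kUsePremadeThread + kCreateIfNeeded,
                  @gFilterResult, newThread);
 IF err <> noErr THEN
  DebugStr('NewThread failed');  { newThread is kNoThreadID here }
END;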

FUNCTION DisposeThread (threadToDump: ThreadID; threadResult:  LONGINT; 
recycleThread: BOOLEAN):OSErr;

DisposeThread gets rid of the specified thread. The threadResult parameter is passed on to the thread's creator (see NewThread). The recycleThread parameter specifies whether to return the thread structure to the allocation pool supply, or to free it entirely.


/* 19 */
Result codes:
 noErr              Specified thread was disposed
 threadNotFoundErr  There is no existing thread with the specified ThreadID
 threadProtocolErr  ThreadID specified the application thread

Note: Disposing of a thread from a preemptive thread forces the disposed thread to be recycled regardless of the recycleThread setting. A thread that returns from its entry routine causes itself to be disposed.

Basic Scheduling Routines

These routines allow the application to get information about and have basic scheduling control of the current thread, without specific attention to the other threads in the application.


/* 20 */
FUNCTION GetCurrentThread (VAR currentThreadID: ThreadID):OSErr;

GetCurrentThread finds the ThreadID of the current thread, and stores it in the currentThreadID parameter.


/* 21 */
Result codes:
 noErr              Current ThreadID returned
 threadNotFoundErr  There is no current thread

FUNCTION YieldToAnyThread : OSErr;

YieldToAnyThread relinquishes the current thread's control, causing generalized rescheduling. The current thread suspends in the kReadyThreadState, awaiting availability of the CPU. When the thread is again scheduled, this routine regains control and returns to the caller.

Important: Threads must yield in the CPU addressing mode (24 or 32-bit) in which the computer normally operates. However, threads may be preempted in any CPU addressing mode.


/* 22 */
Result codes:
 noErr              Current thread has yielded and is now running again
 threadProtocolErr  Current thread is in a critical section (see ThreadBeginCritical)

Preemptive Thread Scheduling Routines

These routines are useful when the application includes preemptive threads.


/* 23 */
FUNCTION ThreadBeginCritical : OSErr;

ThreadBeginCritical indicates to the Thread Manager that the current thread is entering a critical section with respect to all other threads in the current application. This disables preemptive scheduling, preventing interference from the other threads. Note that this routine is not needed if there are no active preemptive threads in the application.

Note: Critical sections may be nested.

Important: Preemptive threads may be interrupted to execute a cooperative thread, so critical sections can exist in them, as well.


/* 24 */
Result codes:  noErr Current thread can now execute critical section

FUNCTION ThreadEndCritical : OSErr;

ThreadEndCritical indicates to the Thread Manager that the current thread is exiting a critical section.


/* 25 */
Result codes:
 noErr              Current thread has exited its most deeply nested critical section
 threadProtocolErr  Current thread is not in a critical section (see ThreadBeginCritical)

Advanced Scheduling Routines

These routines allow the application to schedule threads with greater control and responsibility. Typically, an application-wide view of threads is needed when applying these routines.


/* 26 */
FUNCTION YieldToThread (suggestedThread: ThreadID):OSErr;

YieldToThread relinquishes the current thread's control, causing generalized rescheduling, but passes the suggestedThread to the scheduler. The current thread suspends in the kReadyThreadState, awaiting availability of the CPU. When the thread is again scheduled, this routine regains control and returns to the caller.

Important: Threads must yield in the CPU addressing mode (24 or 32-bit) in which the computer normally operates. Preemptive threads should never explicitly yield to cooperative threads. Doing so would in effect be causing preemption between cooperative threads.


/* 27 */
Result codes:
 noErr              Current thread has yielded and is now running again
 threadNotFoundErr  There is no existing thread with the specified ThreadID, or the suggested thread is not in the ready state
 threadProtocolErr  Current thread is in a critical section (see ThreadBeginCritical)

FUNCTION GetThreadState (threadToGet: ThreadID; 
 VAR  threadState:ThreadState):OSErr;

GetThreadState returns the execution state of the specified thread in the threadState parameter. In the presence of preemptive threads, the state of any thread can change asynchronously (at any time). This implies that the value returned from GetThreadState might be inaccurate by the time the caller checks it. If absolute correctness is required, this call should be made while preemptive scheduling is disabled, such as in a critical section (delimited by ThreadBeginCritical and ThreadEndCritical) or during the custom scheduling routine (see SetThreadScheduler).


/* 28 */
Result codes:
 noErr              threadState contains the specified thread's state
 threadNotFoundErr  There is no existing thread with the specified ThreadID
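
For example, a main-thread routine that wakes a stopped worker thread only if it really is stopped might bracket the check and the state change in a critical section; gWorkerThread is a hypothetical application global holding the worker's ThreadID:

PROCEDURE WakeWorkerIfStopped;
VAR
 err:   OSErr;
 state: ThreadState;
BEGIN
 err := ThreadBeginCritical;   { keep the state from changing underneath us }
 err := GetThreadState(gWorkerThread, state);
 IF state = kStoppedThreadState THEN
  err := SetThreadState(gWorkerThread, kReadyThreadState, kNoThreadID);
 err := ThreadEndCritical;
END;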

FUNCTION SetThreadState (threadToSet: ThreadID; 
 newState:ThreadState;  suggestedThread: ThreadID):OSErr;

SetThreadState puts the specified thread into the specified state. If the current thread is specified, and newState is either kReadyThreadState or kStoppedThreadState, rescheduling occurs and the suggestedThread parameter is passed on to the scheduler.

Important: Threads must yield in the CPU addressing mode (24 or 32-bit) in which the computer normally operates.


/* 29 */
Result codes:
 noErr              Thread was put in the specified state. If this was the current thread, it is now running again
 threadNotFoundErr  There is no existing thread with the specified ThreadID, or the suggested thread is not in the ready state
 threadProtocolErr  Caller attempted to suspend/stop the desired thread, but the desired thread is in a critical section (see ThreadBeginCritical), or newState is an invalid state

FUNCTION SetThreadStateEndCritical (threadToSet: ThreadID;
   newState:ThreadState; suggestedThread: ThreadID):OSErr;

SetThreadStateEndCritical atomically puts the specified thread into the specified state and exits the current thread's critical section. If the current thread is specified, and newState is either kReadyThreadState or kStoppedThreadState, rescheduling occurs and the suggestedThread parameter is passed on to the scheduler. This call is useful in cases where the current thread needs to put itself in a stopped state at the end of a critical section, thereby closing the scheduling window between a call to ThreadEndCritical and SetThreadState.

Important: Threads must yield in the CPU addressing mode (24 or 32-bit) in which the computer normally operates.


/* 30 */
Result codes:
 noErr              Thread was put in the specified state. If this was the current thread, it is now running again
 threadNotFoundErr  There is no existing thread with the specified ThreadID, or the suggested thread is not in the ready state
 threadProtocolErr  Current thread is not in a critical section (see ThreadBeginCritical), or newState is an invalid state

FUNCTION GetThreadCurrentTaskRef (VAR threadTRef: ThreadTaskRef): OSErr;

GetThreadCurrentTaskRef returns an application process reference for later use, potentially at interrupt time. This task reference will allow the Thread Manager to get & set information for a particular thread during any application context.


/* 31 */
Result codes:  noErr Thread task reference was returned

FUNCTION GetThreadStateGivenTaskRef ( threadTRef: ThreadTaskRef; 
 threadToGet: ThreadID; VAR threadState: ThreadState ):OSErr;

GetThreadStateGivenTaskRef returns the state of the given thread in a particular application. The primary use of this call is for completion routines or interrupt level code which must acquire the state of a given thread at times when the application context is unknown.


/* 32 */
Result codes:
 noErr              threadState contains the specified thread's state
 threadNotFoundErr  There is no existing thread with the specified ThreadID and TaskRef
 threadProtocolErr  Caller passed in an invalid TaskRef

FUNCTION SetThreadReadyGivenTaskRef( threadTRef: ThreadTaskRef; 
 threadToSet: ThreadID ):OSErr;

SetThreadReadyGivenTaskRef marks a stopped thread as ready and eligible to run; however, the thread is not put in the ready queue until the next time rescheduling occurs. Only threads that are stopped are eligible to be marked as ready by this routine. An example use of this routine is to allow a completion routine to unblock a thread which stopped itself after making an asynchronous I/O call.


/* 33 */
Result codes:
 noErr              The specified thread is marked as ready
 threadNotFoundErr  There is no existing thread with the specified ThreadID and TaskRef
 threadProtocolErr  Caller attempted to mark a thread ready that is not in the stopped state, or caller passed in an invalid TaskRef
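
A common pattern, sketched below under several assumptions, is for a cooperative thread to issue an asynchronous request, stop itself, and be marked ready again by interrupt-level code. gTaskRef and gWaitingThread are hypothetical application globals captured before blocking; StartAsyncRequest is a hypothetical routine that issues an asynchronous operation whose completion routine calls WakeWaitingThread (the usual A5 setup for interrupt-level access to globals is omitted here):

VAR
 gTaskRef:       ThreadTaskRef;  {captured before blocking}
 gWaitingThread: ThreadID;

PROCEDURE WaitForCompletion;
VAR
 err: OSErr;
BEGIN
 err := GetThreadCurrentTaskRef(gTaskRef);
 err := GetCurrentThread(gWaitingThread);
 err := ThreadBeginCritical;         { close the window between issuing and stopping }
 StartAsyncRequest;                  { hypothetical; completion routine calls WakeWaitingThread }
 err := SetThreadStateEndCritical(kCurrentThreadID,
          kStoppedThreadState, kNoThreadID);
 { execution resumes here after the completion routine marks the thread ready }
END;

PROCEDURE WakeWaitingThread;         { called at interrupt time }
VAR
 err: OSErr;
BEGIN
 err := SetThreadReadyGivenTaskRef(gTaskRef, gWaitingThread);
END;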

FUNCTION SetThreadScheduler (threadScheduler:
 ThreadSchedulerProcPtr):OSErr;

SetThreadScheduler installs a custom thread scheduler, replacing any current custom scheduler. A threadScheduler of nil specifies “none”.

Important: The application A5 global pointer is not guaranteed to be the value in the CPU’s A5 register when the Thread Manager calls back to your custom scheduler. Be sure to set up register A5 before accessing global data.


/* 34 */
Result codes:  noErr Specified scheduler was installed

FUNCTION SetThreadSwitcher (thread: ThreadID;
 threadSwitcher: ThreadSwitchProcPtr;
 switchProcParam: LONGINT; inOrOut: BOOLEAN):OSErr;

SetThreadSwitcher installs a custom thread context switching routine for the specified thread in addition to the standard processor context which is always saved. A threadSwitcher of nil specifies “none”. The inOrOut parameter indicates whether the routine is to be called when the thread is switched in (inOrOut is “true”), or when the thread is switched out (inOrOut is “false”). The switchProcParam specifies a parameter to be passed to the thread switcher.

Each thread is treated separately, so threads are free to mix and match custom switchers and parameters. For example, there could be one custom switching routine that is installed with a different parameter on each thread.

Important: The application A5 global pointer is not guaranteed to be the value in the CPU’s A5 register when the Thread Manager calls back to your custom switcher. Be sure to set up register A5 before accessing global data.


/* 35 */
Result codes:
 noErr              Specified thread switcher was installed
 threadNotFoundErr  There is no existing thread with the specified ThreadID
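
For example, a pair of switching routines might maintain a hypothetical "current job" global for whichever thread is running. In this sketch, switchProcParam is assumed to carry a per-thread tag supplied when the switchers were installed, and SetCurrentA5/SetA5 are used because A5 is not guaranteed in Thread Manager callbacks:

VAR
 gCurrentJobTag: LONGINT;  {hypothetical: identifies the thread now running}

PROCEDURE MySwitchIn (threadBeingSwitched: ThreadID; switchProcParam: LONGINT);
VAR
 oldA5: LONGINT;
BEGIN
 oldA5 := SetCurrentA5;          { set up A5 before touching globals }
 gCurrentJobTag := switchProcParam;
 oldA5 := SetA5(oldA5);          { restore the caller's A5 }
END;

PROCEDURE MySwitchOut (threadBeingSwitched: ThreadID; switchProcParam: LONGINT);
VAR
 oldA5: LONGINT;
BEGIN
 oldA5 := SetCurrentA5;
 gCurrentJobTag := 0;            { no thread's tag is current }
 oldA5 := SetA5(oldA5);
END;

They would be installed per thread, for example (myThread and myTag are hypothetical):

 err := SetThreadSwitcher(myThread, @MySwitchIn, myTag, TRUE);
 err := SetThreadSwitcher(myThread, @MySwitchOut, myTag, FALSE);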

FUNCTION SetThreadTerminator (thread: ThreadID;
 threadTerminator: ThreadTerminationProcPtr;
 terminationProcParam: LONGINT):OSErr;

SetThreadTerminator installs a custom thread termination routine for the specified thread. The custom thread termination routine will be called at the time a thread is exited or is manually disposed of. The terminationProcParam specifies a parameter to be passed to the thread terminator.

Each thread is treated separately, so threads are free to mix and match custom terminators and parameters. For example, there could be one custom termination routine that is installed with a different parameter on each thread.


/* 36 */
Result codes:
 noErr              Specified thread terminator was installed
 threadNotFoundErr  There is no existing thread with the specified ThreadID
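
As an illustration, a termination routine for a cooperative thread might free a hypothetical per-thread buffer whose pointer was passed as terminationProcParam when SetThreadTerminator was called. A minimal sketch, again using SetCurrentA5/SetA5 because A5 is not guaranteed in callbacks:

PROCEDURE MyTerminator (threadTerminated: ThreadID; terminationProcParam: LONGINT);
VAR
 oldA5: LONGINT;
BEGIN
 oldA5 := SetCurrentA5;                    { set up A5 before touching globals }
 IF terminationProcParam <> 0 THEN
  DisposePtr(Ptr(terminationProcParam));   { release this thread's private buffer }
 oldA5 := SetA5(oldA5);
END;

It would be installed with, for example (myThread and myBufferPtr are hypothetical):

 err := SetThreadTerminator(myThread, @MyTerminator, LONGINT(myBufferPtr));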

Thread Debugging Support

The following routine is set aside for debuggers to install watchdog procedures when the major state of a thread changes. These routines are reserved for use by debuggers to help in the development of multithreaded applications.


/* 37 */
FUNCTION SetDebuggerNotificationProcs (
 notifyNewThread: DebuggerNewThreadProcPtr;
 notifyDisposeThread: DebuggerDisposeThreadProcPtr;            
 notifyThreadScheduler: DebuggerThreadSchedulerProcPtr 
 ):OSErr;

SetDebuggerNotificationProcs sets the per-application support for debugger notification of thread birth, death and scheduling. The debugger will be notified with the threadID of the newly created or disposed of thread. The debugger is also notified if the thread simply returns from its highest level of code and thus automatically disposes itself. The DebuggerThreadSchedulerProcPtr will be called after the custom scheduler and the Thread Manager's generic scheduler have decided on a thread to schedule. In this way, the debugger gets the last shot at a scheduling decision.

Important: All three debugger callbacks are installed when this call is made. It is not possible to set one or two of the callbacks at a time with this routine, and it is not possible to chain these routines. This restriction ensures that the last caller of this routine owns all three of the callbacks. Setting a procedure to NIL will effectively disable it from being called. Also note that the application A5 global pointer is not guaranteed to be the value in the CPU’s A5 register when the Thread Manager calls back to your debugger procedures.


/* 38 */
Result codes:
 noErr  Debugger procs have been installed

Routines That Move Or Purge Memory:

CreateThreadPool
NewThread
DisposeThread    - When ‘recycleThread’ is false

Routines You Can Call During Preemptive Thread Execution:

NewThread - When ‘kUsePremadeThread’ is used
DisposeThread    - When ‘recycleThread’ is true
GetCurrentThread
GetFreeThreadCount
GetDefaultThreadStackSize
ThreadCurrentStackSpace
GetThreadState
SetThreadState   - See note below
SetThreadStateEndCritical - See note below
ThreadBeginCritical
ThreadEndCritical
YieldToAnyThread
YieldToThread    - See note below
SetThreadScheduler
SetThreadSwitcher
SetDebuggerNotificationProcs
GetThreadCurrentTaskRef

Note: The SetThreadState, SetThreadStateEndCritical, and YieldToThread routines are usable during preemptive execution only when the suggestedThread parameter is either kNoThreadID or another preemptive thread. Explicitly requesting a cooperative thread to run from a preemptive thread is dangerous and should be avoided.


/* 39 */
Routines You Can Call At Interrupt Time:

GetThreadStateGivenTaskRef
SetThreadReadyGivenTaskRef

Toolbox & OS Routines You Can Call From a Cooperative Thread:
• All routines are available from a cooperative thread after MaxApplZone 
has been called.
• On a Mac Plus, only the main application thread may make resource manager 
calls (explicitly UpdateResFile & CloseResFile).
Toolbox & OS routines You Can Call From a Preemptive Thread:
Preemptive threads must follow the same rules as interrupt service routines 
as to which Toolbox and OS calls they may make.

Part 3: Gotchas & Bugs

Gotcha: The Memory Manager routine MaxApplZone must be called before any thread other than the main application thread allocates memory, or causes memory to be allocated. See Inside Macintosh: Memory for information on using memory and expanding the application heap.

Gotcha: Making certain calls to the Toolbox & OS during preemptive thread execution is a programming error; calls which may not be made at interrupt time may not be made by a preemptive thread. This includes calls to LoadSeg which get made on behalf of the application when accessing code segments which have not yet been loaded into memory. Applications must be sure that all code segments used by preemptive threads are preloaded and do not get unloaded. One method of ensuring certain traps are not called at the wrong time is to define custom context switchers for preemptive threads. A custom context switcher-inner could be written to save and change the trap address of the trap in question (say LoadSeg) to a routine which drops into the debugger if that trap gets called while the thread is switched in. A custom context switcher-outer would then restore the original trap address for the rest of the application.

Gotcha: On a Mac Plus only, the main application thread should be the only thread which makes use of the Resource Manager. Specifically, calls to UpdateResFile or CloseResFile should only be made by the main application thread on a Mac Plus. All other Macintoshes support Resource Manager calls from any cooperative thread.

Gotcha: The application A5 global pointer is not guaranteed to be the value in the CPU’s A5 register when the Thread Manager calls back to your custom call back routines. Be sure to set up register A5 before accessing global data from a custom scheduler, custom switchers, termination procedures, and debugger call back routines.

 
