Energy Transport

Energy transport inside a star takes place in three ways:

  • Radiation: dependent on opacity, flux, density, and temperature. Larger opacity and flux imply a steeper decrease in T with increasing r:
    dT/dr = −(3/(4ac)) · (κρ/T³) · (L_r / 4πr²)
  • Convection: cool masses move “down” (in the direction of gravitational pull) and hot masses move “up” (against gravitational pull). If the T gradient is too steep, convection can play an important role.
  • Conduction: generally negligible.

We look more closely at convection.

  • Its treatment is very complex, and the theory is not yet fully developed.
  • It relies on the Navier-Stokes equations in 3-D.
  • For the study of stellar structure, usually we use 1-D approximations (only spatial variable is r, i.e. we impose spherical symmetry).
  • The pressure scale height (a characteristic length scale for convection) is of the same order as the size of the star, thus convection is coupled to the star’s behavior as a whole.
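
For reference, the usual textbook definition of the pressure scale height (the exact form is not in these notes):

H_P \equiv -P \left( \frac{dP}{dr} \right)^{-1} = \frac{P}{\rho g}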

The model that we use is that of a hot bubble which (1) rises and expands adiabatically, and then after travelling some distance, (2) it thermalizes with the medium.

This is a thermodynamic model based on state functions (changes in thermodynamic state functions depend only on the initial and final states of the system, and not on the path followed).

The following equation describes how the temperature of the gas inside the bubble changes as the bubble rises and expands adiabatically:

dT/dr |_ad = −(1 − 1/γ) · (μ mH / k) · (G M_r / r²)

γ = C_P / C_V

CP and CV are the specific heat capacities at constant pressure and constant volume respectively.

Superadiabaticity: the case in which the star’s actual temperature gradient is steeper than the adiabatic temperature gradient (i.e. the gradient inside the bubble). It represents the condition for convection to dominate:

| dT/dr |_act > | dT/dr |_ad

If “act” is larger than “ad”, all luminosity is carried by convection.

It can alternatively be written as (assuming that µ, the mean molecular weight, does not vary):

d ln P / d ln T < γ / (γ − 1)

So either radiation or convection dominates, depending on the circumstances. When is convection favored?

  1. Conditions of large stellar opacity
  2. Ionization taking place, which means a large specific heat and a low adiabatic T gradient.
  3. Low local gravitational acceleration
  4. Large T dependence of nuclear reactions (e.g. CNO cycle and triple alpha)

Nuclear Reactions

What are the nuclear reactions taking place in the interior of the stars, and what are their elementary steps?

There are many. In these reactions, light elements fuse into heavier elements. The more massive a star and the higher the temperature of its interior, the more reactions become available, transforming atoms into heavier and heavier elements in a process known as nucleosynthesis.

The elementary steps of nuclear reactions need to fulfill some conditions:

  • The electric charge needs to be conserved.
  • The number of leptons (electrons, positrons, neutrinos, and antineutrinos) needs to be conserved.
    Condition: (matter leptons) − (antimatter leptons) = constant
    Note: electron + positron = 2 photons (γ) (matter and antimatter annihilate each other producing photons)

elementary-particles

Neutrinos are somewhat special in that they have a very small mass and a very small cross section, therefore they have very long mean free paths (of the order of 10^9 solar radii!).

In the following, I represent nuclei by the symbol:

^A_Z X (X = chemical symbol, Z = number of protons, A = total number of nucleons)

One possible reaction is the conversion of hydrogen into helium. This happens via the proton-proton chain or via the CNO cycle, where carbon, nitrogen, and oxygen act as catalysts.

proton-proton-chain

The proton-proton chain consists of three branches: PP-I, PP-II, PP-III.

PP-I. Source.

The slowest step in the PP-I chain is the first one, which involves the weak nuclear force.

PP-II. Source.

PP-III. Source.

In an environment similar to the center of the Sun, 69% of the time the PP-I chain takes place, while 31% of the time the PP-II chain occurs. In such an environment, the occurrence of the PP-III chain is rarer, with a probability of 0.3%. These probabilities change depending on the temperature of the star.

In summary, the three branches of the PP-chain:

The next pic represents the CNO cycle.

CNO cycle. Source.

The cycle above culminates with the production of carbon-12 and helium-4. Another, less probable possibility (0.04%) is the production of oxygen-16 and a photon instead.

The effect of fusing hydrogen into helium is the increase of the mean molecular weight (μ). The star begins to collapse, and this is compensated by a gradual increase of T and ρ.

Therefore, the PP-chain is dominant in low mass stars with low T, while the CNO cycle dominates at higher T and more massive stars.

In the triple alpha process, helium is converted into carbon. The “alpha” comes from the historical mysterious alpha particles, which turned out to be helium-4 nuclei. This process may be thought of as a 3-body interaction, and it has a very dramatic T dependence.

Triple alpha process. Source.

The reaction rates for the processes above follow power laws in T (exponents evaluated near typical core temperatures). Simply note the temperature dependence of each reaction: each needs a larger T than the previous one, and the T-dependence is much stronger for the later ones (indicated by larger powers). Thus a given increase in T causes a larger and larger increase in the reaction rates:

ε_pp ∝ ρ X² T⁴

ε_CNO ∝ ρ X X_CNO T^19.9

ε_3α ∝ ρ² Y³ T^41

At sufficiently high temperatures, many more reactions become available, consisting of the formation of heavier and heavier nuclei. Examples are carbon-burning reactions, yielding oxygen, neon, sodium, and magnesium, and oxygen-burning reactions, yielding magnesium, silicon, phosphorus, and sulphur.

If we plot the binding energy per nucleon (nucleons = neutrons and protons), Eb/A, we obtain a maximum at iron, and a series of stability peaks corresponding to particularly stable nuclei (e.g. helium, carbon, oxygen). These “enhanced stability peaks” are caused by the shell structure of nuclear energy levels (similar to the shell structure of the electrons around a nucleus).

Generally speaking, fusion reactions of elements with mass number larger than 56 tend to consume energy, while fusion reactions resulting in iron or elements lighter than iron tend to liberate energy.


Python

* Basics: – Difference between Fortran and Python? (compile step, static vs. dynamic typing)

Whereas Fortran is a compiled language, Python is an interpreted language that doesn’t require a compilation step. Python programs can be tested faster, without the need to recompile after each change in the code. It also means that for computationally demanding programs, Fortran will execute faster. One can take advantage of both by performing the number crunching in Fortran and using Python as the glue.

The second major difference is that whereas Fortran is statically typed (variable types must be explicitly declared), Python is dynamically typed: types are attached to objects at run time and need not be declared, which usually means less work on the programmer’s side.

– Basic datatypes (integer division!) and type conversion rules. Everything in Python, whatever its datatype, is an object.

Integer, float, character, and boolean (True or False). Think of them as made of single elements. Beware of integer division (this is Python 2 behavior; Python 3 returns 1.5):

>>> 3 / 2
1

To specify that a number is of float type, simply write it as 3. (adding a point):

>>> 3./2
1.5

Floating datatypes have priority over integers in arithmetic operations. This means that if I add for example an integer and a float, the result will be a float.
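
For example:

>>> 3 + 2.0
5.0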

– Compound datatypes (mutable and immutable): e.g. tuple, string, list, dictionary

Strings: “…”, tuples (..,..), lists […,…], dictionaries {key: value, key: value}. Think of compound datatypes as lists of objects. Compound datatypes can be acted upon element-wise, and each element has its own corresponding index. Strings can be thought of as lists of characters. Tuples are immutable: their elements cannot change once written. Example:

>>> a=(1,2,3)
>>> a[2]
3
>>> a[2] = 4
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment

Note: strings behave like tuples, they’re immutable.

Dictionaries are similar to lists, except that the key (think of it as the index) is also user-defined.
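
A minimal example (the keys and values here are made up):

>>> d = {'H': 1.008, 'He': 4.003} # the keys here are strings
>>> d['He'] # indexing by key
4.003
>>> d['C'] = 12.011 # adding a new key-value pair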

– Basic usage of compound datatypes (to be used for what?) Compound datatypes are a way to store collections of objects.

– Variables are not really “variables” but “name-tags” on objects Name-tags point to a specific amount of data in memory. Example:

>>> s = 'hello'
>>> l = s
>>> l
'hello'
>>> s = 'hey'
>>> l
'hello'

Up there, we don’t create two ‘hello’s, there is one ‘hello’ and two name-tags pointing to it (s and l). If we point the name-tag s somewhere else (e.g. ‘hey’), absolutely nothing happens to ‘hello’ and the other name-tags pointing to it.

– What are “side effects”? How to avoid them (if not wanted)? An example of a side effect:

>>> s = [1,2,3]
>>> l = [4,5,s]
>>> l
[4, 5, [1, 2, 3]]
>>> s[1]
2
>>> s[1] = 17
>>> l
[4, 5, [1, 17, 3]]

The logic is:

  1. We create a list s.
  2. We create a list l which points to s.
  3. We modify one object in s.
  4. We see this change also in l, since l is simply pointing at s.

To avoid side effects, an explicit copy of the list can be made by slicing (list.copy() exists only in Python 3; note also that slicing makes a shallow copy, while copy.deepcopy() also copies nested lists):

listcopy = listname[:]
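
Applied to the side-effect example above, a quick check that the copy decouples the two name-tags:

>>> s = [1, 2, 3]
>>> l = [4, 5, s[:]] # store a copy of s instead of s itself
>>> s[1] = 17
>>> l
[4, 5, [1, 2, 3]]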

– indexing and slicing

Sequences (e.g. strings, tuples, lists) can be indexed with o[index]. Dictionaries are indexed by keys. Indexing begins at zero. Slicing takes slices of a sequence; pay attention to the indexing, and to where the slice is made. The syntax is o[first:last], where the element at index last is excluded. Examples:

>>> a = [0,1,2,3,4,5]
>>> a[:]
[0, 1, 2, 3, 4, 5]
>>> a[0:1] # Think of the slicing index as pointing at the commas, rather than
# the elements. It indicates where we slice.
[0]
>>> a[0:-1]
[0, 1, 2, 3, 4]
>>> a[1:-1]
[1, 2, 3, 4]
>>> a[1:2]
[1]
>>> a[1:3]
[1, 2]

– Use of operators like “+” or “*” for lists or strings

“+” joins strings and lists. Example:

>>> 'hello' + 'world'
'helloworld'

“*” repeats a string or list an integer number of times. Example:

>>> 'hello'*3
'hellohellohello'
>>> a = [1,2,3]
>>> a*3
[1, 2, 3, 1, 2, 3, 1, 2, 3]

– I/O and string operations

Whatever is read from a file is a string. For reading from and writing to files, we use the built-in open function and the file methods write, read, readline, and close:

f = open("filename","x") # x is the mode; it can be: w (write),
                         # r (read only), r+ (read & write),
                         # a (append)
f.write("text") # "text" is a string. To write a new line, we use "\n"
f.read() # Reads the whole file into one string
f.close() # Always close a file once it is not needed
f.readline() # Reads one line at a time

Extras:

f.closed # Returns a boolean: True if the file is closed, otherwise False
#
with open("file","w") as name:
  name.write("text") # Automatically closes the file
                     # after the with statement.

* Control Flow:

Examples with each control block. Note the general syntax:

Control statement [condition or iteration]: # Note the colon
  Statement # Note the indentation
Additional control statements: # Again the colon
  Statement # Indentation of whatever happens inside the block
Statement # No indentation indicates that we are outside the 
          # control block

Important: The indentation must be consistent. Choose a fixed number of spaces to indent in each Python program. Control blocks inside other control blocks are further indented:

Control block 1:
  Statement
  Control block 2:
    Statement
  Statement outside CB 2 but inside CB 1
Statement outside CB 1

– if/elif/else

In [3]: if a == 1:
   ...:   print(1)
   ...: elif a == 2:
   ...:   print(2)
   ...: else:
   ...:   print('A lot')

– while. The break is not strictly required, but it can be used to make sure that the program gets out of the control block under some condition.

while True:
    n = raw_input("Please enter 'hello':")
    if n.strip() == 'hello':
        break

– for

>>> words = ['cat', 'window', 'defenestrate']
>>> for w in words:
...     print w, len(w)

– How to write the code? What is special about Python?

See the comment above about indentation.

* Writing Code:

– what is a “namespace”?

Namespaces are spaces containing a bunch of names (or variables). Examples of different namespaces are the namespace of the main program, the namespace of a function inside the program, or the namespace inside a module.

Each object has its own namespace.

Inside a function, the namespace is local to the function. The namespace of the main program is another one, called global. Do not be confused by that: the global namespace is NOT a “shared” namespace available to all objects (as it is in Fortran). Global is just the namespace corresponding to the main program. Reading a name-tag from the global namespace works directly inside a function, but if we want to rebind it there, we need the statement “global name-tag”. Example:

a = 5
def add2():
 global a # Here we do it: declare that a is the global name-tag
 a = a + 3 # Rebinding: without the global statement this would fail
add2()
print a # prints 8

A name inside a module can only be used in the main program if it is imported. Example:

>>> pi # Name "pi" does not exist
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
NameError: name 'pi' is not defined
>>> import math # One way to import it
>>> math.pi
3.141592653589793
>>> from math import pi # Another way
>>> pi
3.141592653589793

So a namespace is a way to understand the scope of a name (= variable). A good way to think about namespaces is as boxes where you dump in names. One box cannot hold two equal names: if we put a name into a box where there is already one called the same, the new name replaces the old one. In particular, importing names from modules should be done with some care, to make sure that nothing important gets overwritten.

More about namespaces and all that, here.

– what is a module and how to use it?

A module is a file, separate from the main program, which contains objects. These objects can easily be made available to the main program by importing (see above).

– defining subroutines/functions

Example:

def square(a):
   # This function returns the square of the argument
   return a**2

Note the def keyword, the syntax similar to a block statement, and the return statement, by which we set the output that is handed back to the caller when the function is executed.

– basics of object orientation:

Every object has a class, methods (functions bound to the object), and attributes. Classes can be predefined or user-defined.

xxx add scheme of namespace and “objectspace”

how to construct a class?

Example of constructing a new class called NewClass (I know…):

class NewClass(object):
  def __init__(self):
    self.data = 0 # an attribute, stored in the namespace of the instance

This is kinda the minimum possible class (not exactly, but almost. See this for details).

Note the “object” object. This defines the type in the “object tree”; the object type is the default base for a new class. The self.data is an attribute of the class: it is simply a name-tag stored inside the namespace of this class. Note also the syntax, similar to that of a function.

After defining a class, we can access any object (nametags, methods, functions, subclasses…) inside this class.

why is it useful? (what is “self”? why is __init__ needed?)

New classes allow us to create, hold, and access user-defined collections of data in an organised way. They also allow us to define ways to operate on this collection of data and manipulate it.

The self is a way of referring, within the class, to the object of this class. A class must be defined without any knowledge of the name-tags of the objects that the user or the programmer will want to create when they are needed. The self simply represents the name-tag for this yet-unknown object. Every method inside the class must have at least one argument, and this is the self argument.

The __init__ (double underscore) method is the way that classes are initialised in python.
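
To see self and __init__ in action, here is a small made-up class (Particle and its attributes are illustrative, not from the notes):

>>> class Particle(object):
...   def __init__(self, mass): # runs automatically when an object is created
...     self.mass = mass # name-tag stored in the namespace of the instance
...   def weight(self, g=9.81): # self is the first argument of every method
...     return self.mass * g
...
>>> p = Particle(2.0) # here __init__ is called; p plays the role of self
>>> p.weight()
19.62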

* Using Numpy – what is numpy good for? how to use it?

Numpy has built-in methods specifically designed for working with multidimensional arrays and performing fast element-wise operations. This makes numpy very good at avoiding explicit loops over lists of objects, which speeds up execution.

Numpy also has methods to compute vector-vector, vector-matrix, and matrix-matrix products:

np.dot(a,b) # Scalar product of two 1-D arrays (matrix product for 2-D arrays)

To use numpy, install it and import it into Python as a module:

import numpy as np # "np" is an arbitrary shorthand name


– basic objects: arrays (important methods, attributes; how to generate them)

Array generation. Examples:

a = np.array([[1,2,3],[4,5,6]]) # Note () as argument delimiter,
# [] for the outer list, and [] for the inner lists.
a = np.zeros([3,3],dtype="float64") # 3 x 3 array filled with 0
# Double precision
a = np.arange(1000) # 1-D array with values from 0 to 999
############################
# Next some general examples:
np.arange(start,end,step)
np.ones((n1,n2,...)) # The argument is the shape, a tuple
np.zeros((n1,n2,...))
np.empty((n1,...)) # Does not initialise the array
np.eye(n) # n x n unit matrix (diagonal = 1, non-diag. = 0)
np.linspace(start,stop,num_elements) # Equally spaced points
np.diag(np.array([1,2,...])) # Diagonal matrix with diagonal =
# 1,2,...
np.random.rand(4) # 4 random floats in [0, 1)
np.random.random_integers(0,20,15) # 15 random int from 0 to 20

Main methods for arrays:

a.dtype # Datatype of the array
a.shape # Shape of the array (a tuple)
len(a.shape) # Number of dimensions
a.ndim # Also number of dimensions
a.ravel() # "Flattens" the array into 1-D
a.reshape((2,3)) # Changes the shape (opposite of ravel)
np.sort(a,axis=1) # Sorts along the given axis
a[:,np.newaxis] # Enforces the creation of a new dimension

The main attribute of an array is its datatype: e.g. “int64”, “int32”, “float64”. The syntax is:

a = np.array(content, dtype="datatype")

– indexing and slicing

Indexing and slicing work the same as in regular Python. Indexing begins with element number 0.

Slicing also works the same as in regular Python. A range can be indicated by a colon (e.g. a[1:3]), and a single colon indicates the whole range (see the example below).

a = np.array([[1,2,3],[4,5,6]])
a[0,0]
>> 1
a[0,:] # Only the first row.
>> [1 2 3]

See that the ordering of rows and columns goes as:

a[row_number,column_number]

In numpy, the last index changes quickest.

– what is “broadcasting”? how do numerical operations work for arrays (and how to exploit this)?

Broadcasting can take place when operating on two arrays of different shapes. The smaller array is “stretched” over the larger one as far as it can. The simplest example would be the addition of the number 1 to an array a: when it operates, it operates on each element of a, thus it is the same as adding an array of the same shape as a but filled with ones everywhere.

The same concept applies to a smaller array which “fits” into a larger array.

xxx Add some visual scheme of broadcasting

Conditions for operating on two arrays; if neither of these is fulfilled, there will be an error:

  • The arrays have equal shapes, or
  • The arrays have different shapes, but for each pair of differing dimensions, one of the two has size one.

The trick for exploiting this is manipulating the arrays, adding and/or shifting dimensions so that the second condition is fulfilled, as in the sketch below.
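
A small sketch of both situations (the shapes are the thing to watch; the arrays are made up):

>>> a = np.ones((3, 3))
>>> b = np.arange(3) # shape (3,): broadcast along the rows
>>> a + b # adds [0, 1, 2] to every row
>>> a + b[:, np.newaxis] # shape (3, 1): adds [0, 1, 2] to every column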

– how to use numpy implicit loops to speed up things

Simple case: we have two vectors, and we want to multiply them element by element. Instead of writing a loop that goes through each element and performs the operation, we simply construct the two arrays and multiply them. This speeds up the program by moving the loop into numpy’s compiled code.
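
A minimal sketch; the vectorized version runs orders of magnitude faster for large arrays:

import numpy as np

a = np.arange(1000000, dtype="float64")
b = np.arange(1000000, dtype="float64")

# Explicit loop (slow, pure Python):
c = np.empty_like(a)
for i in range(len(a)):
    c[i] = a[i] * b[i]

# Implicit loop (fast, runs in numpy's compiled code):
c = a * b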

* Example Cases (relevant):

– compute a COM for a molecule

xxx
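
A minimal numpy sketch of how this could look (the masses and geometry below are made up):

import numpy as np

masses = np.array([12.011, 1.008, 1.008, 1.008, 1.008]) # e.g. a CH4-like molecule
coords = np.random.rand(5, 3) # one xyz row per atom (placeholder geometry)

# Broadcasting: masses[:, np.newaxis] has shape (5, 1), so each coordinate
# row is weighted by its mass without an explicit loop over atoms.
com = (masses[:, np.newaxis] * coords).sum(axis=0) / masses.sum()
print(com) # 3-vector with the center of mass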

– enforce periodic boundary conditions

xxx
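
A sketch of the usual trick with np.mod and the minimum-image convention (the box length and coordinates are made up):

import numpy as np

L = 10.0 # hypothetical cubic box length
coords = 30.0 * np.random.rand(5, 3) - 10.0 # some atoms outside the box

# Wrap all coordinates back into [0, L) in one vectorized operation:
wrapped = np.mod(coords, L)

# Minimum-image convention for the distance vector between two atoms:
d = wrapped[0] - wrapped[1]
d -= L * np.round(d / L) # shift by whole box lengths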

– use numpy for numerics in non time critical parts of simulation codes

xxx

* Advanced Python/Numpy usage (no details, only the basic concepts)

– what is scipy, usage for fitting or optimization

Scipy is an open-source library of algorithms and mathematical tools for the Python programming language. Among other things, it wraps fast Fortran routines for use from Python. The basic data structure in scipy is the array provided by numpy.

In fact, scipy is built on top of numpy.
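
For example, fitting a model function to noisy data with scipy.optimize.curve_fit (the model and the data below are invented for illustration):

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x) # hypothetical model function

xdata = np.linspace(0, 4, 50)
ydata = model(xdata, 2.5, 1.3) + 0.05 * np.random.randn(50) # fake noisy data

popt, pcov = curve_fit(model, xdata, ydata, p0=(1.0, 1.0))
print(popt) # fitted (a, b), close to (2.5, 1.3)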

xxx map

– parallelization with mpi4py

Parallelization can be used in python by importing modules prepared for this:

from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

Parallelization is implemented in a quite automated way, so that the user does not need to think much about it. The only detail is that objects that will be shared amongst the processors have to be broadcast. Example:

if size > 1: # "If nproc > 1, then broadcast self.xyz"
 self.xyz = comm.bcast(self.xyz)

For using mpi in a computer, one must make sure that the openmpi libraries are loaded, e.g.:

module add openmpi-gcc/1.3.3

Parallel python can be run like this:

mpirun -np 2 python program.py

– wrapping of F90 code into Python using f2py

F2py calls Fortran from within Python. We need numpy for that, since arrays are returned as numpy arrays. The tool takes care of everything internally, and the user does not have to think about it.

It is often more efficient to combine Python and Fortran than to write everything in Fortran: Fortran is best used for the computationally demanding calculations, while Python is by far more flexible for configuration, settings control, postprocessing…


Fortran

Elementary Variable Types: What are integer, real, complex, logical, or character variables? Storage demands? Precision of floating point numbers? what is a (parameter) constant?

Memory is organized in Bytes (sets of 8 bits). One “word” is usually considered to be 4 or 8 Bytes.

  • Integer: Integer numbers, -1, 0, 1, 2, … Represented exactly by converting them to the binary system.
  • Real: Floating point numbers, 1.3e-6, 4.44e4, … Takes one word of memory.
  • Complex: Complex floating point numbers, represented by two floating point numbers, (3.0e-6,1.3e4). Takes two words of memory.
  • Logical: Boolean; True or False.
  • Character: character strings, ‘this is a string’. Strings have a certain length. Encoded character by character using a table (usually the ASCII code).
  • Floating point numbers: Split into sign, exponent, and mantissa.

floating-point

The accuracy of floating point numbers depends on the number of bits per word:

  • 32 Bit ~ 6 decimal digits == Single Precision
  • 64 Bit ~ 15 decimal digits == Double Precision

These are declared in the declaration block as:

real(4)   x !Single precision
real(8)   y !Double precision
real(k)   z !Another precision defined by k, e.g. k=16 -> 128 Bits

The default for integers is usually 32 Bit (sufficient for the range −2**31 to +2**31 − 1). But it can be increased, e.g.

integer(8) m

Values of the “kind parameter” depend on the compiler, but most compilers provide double precision floating point numbers. Parameters are constant values, not computed, and available during compilation. They are also protected from being overwritten.

real(8), parameter :: pi = 3.14159265d0 ! note the d0: a double precision literal

 Arrays and allocation: Declaration? How are one- and multi-dimensional arrays stored in Fortran? Static, automatic, and dynamic allocation of arrays. 

Declaration:

real(8),allocatable :: mata(:,:)

Allocation:

allocate(mata(ndim1,ndim2),stat=ierr)
if (ierr.ne.0) then
  ! react to error condition
end if
  • Arrays are stored in successive memory blocks in a given order.
  • Matrices are not stored in some sort of funky matrix-shaped arrangement in the memory, but also successively as an array (see the two examples below), in column-major order, i.e. the elements of one column are stored contiguously.
  • The first index runs fastest (opposite to Python, where the last index runs fastest).
  • Array operations are done element-wise. For instance, the product of two arrays gives simply the product of each pair of components (not the mathematical matrix multiplication!)

matrix-storage

Symmetric matrices can be packed as:

matrix-storage-sym

Q: Difference between static, automatic and dynamic allocation? Static arrays have fixed bounds known at compile time; automatic arrays take their size from dummy arguments when a procedure is entered (typically allocated on the stack); dynamic (allocatable) arrays are allocated explicitly at run time, as in the example above.

Array operations

Arrays can be efficiently operated with the BLAS (Basic Linear Algebra Subprograms). It is organized in 3 levels:

  1. Vector operations
  2. Vector-matrix operations
  3. Matrix-matrix operations

The data transfer from memory to the CPU often takes longer than the arithmetic operations themselves. Each higher level therefore achieves higher performance, since it requires less memory access per arithmetic operation (Level 3 >> Level 2 > Level 1, in terms of efficiency).

About LAPACK library from Wikipedia:

LAPACK (Linear Algebra PACKage) is a software library for numerical linear algebra. It provides routines for solving systems of linear equations, linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition.

LAPACK (…) depends upon the Basic Linear Algebra Subprograms (BLAS) (…)

Assignment expressions

Variables are assigned values with “=”. A value can be a number, a logical expression, a mathematical operation…

Arithmetic assignment: variable = arithmetic expression

Operators: +, -, *, /, %, **

Concerning exponentiation: use integer exponents when possible. Do not use floating point exponents with a negative base of integer or real type; use a complex base instead.

Common intrinsic functions that can be used in arithmetic expressions: abs, sqrt, log, log10, exp, cos, sin, tan, acos, asin, atan, cosh, sinh, tanh (all these have one and only one argument).

Control structures in Fortran:

If block and logical if

IF (logical-expression-1) THEN
   statements-1
ELSE IF (logical-expression-2) THEN
   statements-2
ELSE IF (logical-expression-3) THEN
   statement-3
ELSE IF (.....) THEN
   ...........
ELSE
   statements-ELSE
END IF

Select case construct

SELECT CASE (selector)
   CASE (label-list-1)
      statements-1
   CASE (label-list-2)
      statements-2
   CASE (label-list-3)
      statements-3
     .............
   CASE (label-list-n)
      statements-n
   CASE DEFAULT
      statements-DEFAULT
END SELECT

Do loops with and without counter

In Fortran, there are at least 3 types of loops:

  1. A counter-loop called the “DO” loop which performs the loop for a specified number of times based on an integer counter.
  2. A loop which repeats itself “while” a certain condition remains true. In Pascal and C, this type of loop is called the while-loop.
  3. I don’t care about this one– An infinite loop which never ends; it is stopped by an external interrupt. For VMS and UNIX, a control-C will do the trick. In the newer languages, this type of loop is forbidden. Source.

Do loop:

        do [sl] [counter]=[initial count],[final count],[increment]
           [loop body]
[sl]    continue

While loop:

[sl]    continue
           [loop body]
        if([condition] is true) go to [sl]
        [next program statement]

Cycle and exit statements for do loops (see this)

The CYCLE and EXIT statements specify that the remaining statements in the current iteration of a particular active (enclosing) DO loop are to be skipped. CYCLE specifies that these statements are skipped, but the END DO statement that marks the end of the DO loop be executed—that is, the next iteration, if any, is to be started. If the statement marking the end of the DO loop is not END DO—in other words, if the loop is not a block DO—the CYCLE statement does not execute that statement, but does start the next iteration (if any). EXIT specifies that the loop specified by the DO construct is terminated. The DO loop affected by CYCLE and EXIT is the innermost enclosing DO loop when the following forms are used:

     CYCLE
     EXIT

Otherwise, the following forms specify the construct name of the pertinent DO loop:

     CYCLE construct-name
     EXIT construct-name

Subroutines and functions: see this

Syntax in Fortran

Functions: use a function if you need to do a complicated calculation that has only one result which you may or may not want to subsequently use in an expression.

answer = functionname(argument1, argument2, ...)

Function schematic example. Main things:

  • The function definition has the same structure as that of a program, and it is done outside the main program.
  • The name of the function (AVRAGE) is defined in the declaration block of the program. This is the same variable that the function returns.
  • Variables used inside the function, and passed onto the function from the main program don’t require the same name in and out of the function (in the example, A, B, C, and X, Y, Z).
      PROGRAM FUNDEM
C     Declarations for main program
      REAL A,B,C
      REAL AV, AVSQ1, AVSQ2
      REAL AVRAGE
C     Enter the data
      DATA A,B,C/5.0,2.0,3.0/

C     Calculate the average of the numbers
      AV = AVRAGE(A,B,C)
      AVSQ1 = AVRAGE(A,B,C) **2
      AVSQ2 = AVRAGE(A**2,B**2,C**2)

      PRINT *,'Statistical Analysis'
      PRINT *,'The average of the numbers is:',AV
      PRINT *,'The average squared of the numbers: ',AVSQ1
      PRINT *,'The average of the squares is: ', AVSQ2
      END

      REAL FUNCTION AVRAGE(X,Y,Z)
      REAL X,Y,Z,SUM
      SUM = X + Y + Z
      AVRAGE = SUM /3.0
      RETURN
      END

Subroutines, on the other hand, can return several results. However, calls to subroutines cannot be placed in an expression.

call subroutinename (argument1, argument2, ...)

Example of subroutines:

      PROGRAM SUBDEM
      REAL A,B,C,SUM,SUMSQ
      CALL INPUT(A,B,C)
      CALL CALC(A,B,C,SUM,SUMSQ)
      CALL OUTPUT(SUM,SUMSQ)
      END

      SUBROUTINE INPUT(X, Y, Z)
      REAL X,Y,Z
      PRINT *,'ENTER THREE NUMBERS => '
      READ *,X,Y,Z
      RETURN
      END

      SUBROUTINE CALC(A,B,C, SUM,SUMSQ)
      REAL A,B,C,SUM,SUMSQ
      SUM = A + B + C
      SUMSQ = SUM **2
      RETURN
      END

      SUBROUTINE OUTPUT(SUM,SUMSQ)
      REAL SUM, SUMSQ
      PRINT *,'The sum of the numbers you entered is: ',SUM
      PRINT *,'And the square of the sum is:',SUMSQ
      RETURN
      END

Subroutines have the same structure as programs too, with declaration and statement blocks. They’re also put outside the main program (that’s why they are “called”). Note that the name of the subroutine is not declared in the main program. The variables passed to subroutines and/or obtained from them are not required to have the same name in the program and in the subroutine.

Argument passing

Fortran passes arguments by reference. When the subroutine is called, the memory addresses of the arguments are put on the stack. These are used as the addresses of the subroutine’s arguments. All modifications are seen by the calling routine.

Arguments can also be passed by value. This is done in C, C++, or Java, for example. Instead of putting the memory addresses on the stack, as in passing by reference, copies of the argument values are put on the stack. These copies are used in the subroutine, and modifications are not seen by the calling routine.

Passing by reference:

  • Good: simpler to understand, no efficiency issues with arrays.
  • Bad: less data separation.

Passing by value:

  • Good: allows clear separation of data.
  • Bad: you don’t usually want to make copies of large arrays…

Intent attribute for arguments

The intent attribute can be used for variables that are passed into or out of functions and subroutines. Its usage is recommended for readability. It helps the compiler to detect errors, and allows it to do additional optimisations.

intent(in) means that the variable value can enter, but not be changed

intent(out) means the variable is set inside the procedure and sent back to the main program with any initial values ignored.

intent(inout) means that the variable comes in with a value and leaves with a value (default).

Little example from here:

subroutine square_cube(i,isquare,icube)
  integer, intent(in)  :: i             ! input
  integer, intent(out) :: isquare,icube ! output
  isquare = i**2
  icube   = i**3
end subroutine square_cube

Interface blocks (check this out for more info)

The interface block declares the functions and subroutines to the calling program; in the example below it comes after the main program has ended. For procedures defined in modules, an explicit interface is not needed (it is known automatically).

Example:

		
PROGRAM
(...)
END PROGRAM

interface
  function vector_add(a,b,n)
    real, intent(in) :: a(:),b(:),n
    real vector_add(size(a))
  end function vector_add
end interface

This is not explicitly required, except if pointers or assumed-shape arrays (like a(:) above) are passed as arguments, and in a few other situations.

* Matrix eigenvalue problems: How does one solve small & dense eigenvalue problems? How does one solve large & sparse eigenvalue problems? (some key ideas)

Sparse and dense matrices: from wikipedia

A sparse matrix is a matrix populated primarily with zeros (Stoer & Bulirsch 2002, p. 619). The term itself was coined by Harry M. Markowitz. If the majority of elements differ from zero, then it is common to refer to the matrix as a dense matrix.

When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Operations using standard dense-matrix structures and algorithms are relatively slow and consume large amounts of memory when applied to large sparse matrices. Sparse data is by nature easily compressed, and this compression almost always results in significantly less computer data storage usage. Indeed, some very large sparse matrices are infeasible to manipulate using standard dense algorithms.

For small enough eigenvalue problems (small, dense matrices), the LAPACK eigenvalue solvers are efficient enough. Dense matrices correspond to strongly coupled systems.

The LAPACK subroutines used depend on the type of matrix:

lapack-matrix

See also the beginning of this site:

Eigenvalue problems have also provided a fertile ground for the development of higher performance algorithms. These algorithms generally all consist of three phases:

  1. Reduction of the original dense matrix to a condensed form by orthogonal transformations,
  2. Solution of condensed form, and
  3. Optional backtransformation of the solution of the condensed form to the solution of the original matrix.
  • Reduction of a symmetric matrix to tridiagonal form to solve a symmetric eigenvalue problem: LAPACK routine xSYTRD applies a symmetric block update using the Level 3 BLAS routine xSYR2K; Level 3 BLAS account for at most half the work.
  • Reduction of a rectangular matrix to bidiagonal form to compute a singular value decomposition: LAPACK routine xGEBRD applies a block update using two calls to the Level 3 BLAS routine xGEMM; Level 3 BLAS account for at most half the work.
  • Reduction of a nonsymmetric matrix to Hessenberg form to solve a nonsymmetric eigenvalue problem: LAPACK routine xGEHRD applies a block update. Level 3 BLAS account for at most three-quarters of the work.

Large sparse matrices correspond to loosely coupled systems (i.e. each element of the system is coupled to only a few other elements or neighbours). These are common in problems that involve solving partial differential equations. In such cases, algorithms which work efficiently for small dense matrices can be slow for large sparse matrices, thus special algorithms need to be used.

We want to solve an eigenvalue problem of the form:

A u_i = λ_i u_i

A is assumed to be a Hermitian m*m matrix.

Computational cost of diagonalizing a dense n*n matrix:

  • Operation counts scale as k·n³, with 1 < k < 30
  • Memory demands ≲ 2n² + 5n

Something that we want to avoid is computing operations using large arrays. The compiler might create a temporary copy of an already huge array! Instead, we want to perform operations using single components as input, if possible.

Whenever possible, we want to avoid storing large matrices A. Instead, we can use matrix-vector products such as σ = Ab and store σ and b (much less stuff to put in memory).

To find iterative solutions, we choose a starting guess for the eigenpairs that we want to obtain within a defined precision. The residuals (errors) for approximate eigenpairs are then:

r_i = A x_i − λ_i x_i, evaluated for the approximate eigenpair (λ_i, x_i)

We don’t need to find all eigenpairs, only the ones that interest us.

Rayleigh quotient: can be defined for any nonzero vector x as:

ρ(x) = (x† A x) / (x† x)

Then:

λ_min ≤ ρ(x) ≤ λ_max

If x is an eigenvector ui of A, then:

ρ(u_i) = λ_i

Next, I introduce 3 different algorithms to deal with large sparse matrices:

The Power Iterations:

Read this from wikipedia:

In mathematics, the power iteration is an eigenvalue algorithm: given a matrix A, the algorithm will produce a number λ (the eigenvalue) and a nonzero vector v (the eigenvector), such that Av = λv.

The power iteration is a very simple algorithm. It does not compute a matrix decomposition, and hence it can be used when A is a very large sparse matrix. However, it will find only one eigenvalue (the one with the greatest absolute value) and it may converge only slowly.

And check the explanation in the beginning of either of these documents to understand something:
http://www.cs.cornell.edu/~bindel/class/cs6210-f09/lec26.pdf
http://www.math.usm.edu/lambers/mat610/sum10/lecture14.pdf

In sum: power iterations provide the dominant eigenvector (the one with the largest absolute eigenvalue). Expanding the starting vector in the eigenbasis, repeated application of A suppresses the other eigenvectors through the factor:

(λ_i / λ_1)^k

which tends to zero as k becomes larger, for i ≠ 1.

Algorithm:

For one eigenpair:

power-series

power-series-general

  • The iterations converge linearly to the “dominant” eigenvector (or most extremal one) of A (the one with the largest absolute eigenvalue).
  • The convergence of this method depends on the ratios between the eigenvalues.
  • Extremal eigenvalues are easier to find than inner eigenvalues.
  • A good starting guess is important for convergence.
  • We can increase convergence if we can apply some cheap way of transforming the eigenvalue that we want to find into an isolated extremal eigenvalue (shift).

But power iterations are not very efficient or stable…
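
Still, the basic idea fits in a few lines of numpy (a sketch of my own, with a made-up test matrix):

import numpy as np

def power_iteration(A, tol=1e-10, maxiter=1000):
    # Returns the dominant eigenpair of A by repeated multiplication.
    x = np.random.rand(A.shape[0])
    x /= np.linalg.norm(x) # normalized starting guess
    lam = 0.0
    for _ in range(maxiter):
        y = A.dot(x) # the only place where A is needed
        lam_new = x.dot(y) # Rayleigh quotient (x has unit norm)
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

A = np.diag([1.0, 2.0, 5.0]) + 0.01 * np.ones((3, 3))
lam, x = power_iteration(A)
print(lam) # close to the largest eigenvalue (~5.01)

Note that A enters only through the product A.dot(x): for a large sparse A this product can be formed without ever storing the dense matrix, which is exactly the point made above about storing σ = Ab instead of A.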

During the power iterations, a set of vectors u, Au, A²u, … is generated. The space spanned by these vectors is the Krylov space. These vectors contain information about eigenpairs other than the most extremal ones, but this information is discarded during power iterations.

About the Krylov space (not necessary, but somewhat clarifying): wikipedia

The Lanczos Methods

Lanczos methods exploit all vectors in the Krylov space.

  • Use an orthonormal basis {v1, v2, …, vm} for the Krylov space (found by Gram-Schmidt orthogonalization). Working with an orthonormal basis improves numerical stability.
  • Project the eigenvalue problem into Krylov space to find a fast solution.

General algorithm:

lanczos

In the orthonormal basis of the Krylov space, the reduced A matrix becomes tridiagonal.

tridiagonal-matrix

This provides a very efficient computation of eigenvalues (~30m² FLOPs) and eigenvectors (~3m³ FLOPs).

After solving the eigenvalue problem, the norms of the residuals are obtained. If these are smaller than the desired tolerance, the full space of eigenvectors can be obtained:

full-ev-space

The good:

  • Better convergence and more robust than power iterations.
  • Little memory demand.
  • Operation count per iteration only slightly larger than power iterations.

The bad:

  • Gram-Schmidt is numerically unstable, is not suited for the computation of many eigenvalues, and needs a reasonably good starting vector.
  • Cannot deal with degenerate eigenvalues without further modification.
  • Can only find eigenvectors that have some overlap with starting vector. If the starting vector is an eigenvector, we can’t find any additional eigenvectors.

(What is Gram-Schmidt? Check this, it basically enforces orthogonalization vector by vector, and in the end it normalizes each basis vector)

The Jacobi-Davidson Methods

The Jacobi-Davidson methods improve the stability and convergence with respect to Lanczos. Instead of exploiting the Krylov subspace, one can also try to find better basis vectors directly.

In the Block-Davidson algorithm, we construct new basis vectors in each iteration step, roughly one per non-converged eigenvector.

davidson

A bit more detail on how it works and how we compute t:

davidson-1

davidson-2

We then make use of the following:

davidson-3
davidson-4
davidson-5 (we use an approximate matrix which is easier to solve, e.g. a diagonal matrix)

Then, we obtain t by solving this:

davidson-6

The good:

  • Robust and fast convergence.
  • No problem with degenerate eigenvalues.
  • Very flexible

The bad:

  • Higher CPU cost per iteration.
    cpu-cost
    k = eigenpairs
    m = vectors of reduced space
  • Higher storage and I/O demands.

* Ordinary differential equations: xxx

how does one solve them on a computer?

key ideas and characteristics of basic algorithms: Euler, Velocity Verlet, Leapfrog

how does one solve initial value problems, and how two-point boundary value problems?
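
A minimal Python sketch of two basic integrators applied to a 1-D harmonic oscillator (a standard test case, chosen here for illustration):

import numpy as np

def acc(x):
    return -x # harmonic oscillator: exact solution x(t) = cos(t)

dt, nsteps = 0.01, 1000 # integrate up to t = 10

# Forward Euler: simplest scheme, but energy (and amplitude) drifts.
x, v = 1.0, 0.0
for _ in range(nsteps):
    x, v = x + dt * v, v + dt * acc(x) # right-hand sides use the old values

print(abs(x - np.cos(dt * nsteps))) # Euler error, noticeably large

# Velocity Verlet: time-reversible, much better long-time energy behavior.
x, v = 1.0, 0.0
a = acc(x)
for _ in range(nsteps):
    x += dt * v + 0.5 * dt * dt * a
    a_new = acc(x)
    v += 0.5 * dt * (a + a_new)
    a = a_new

print(abs(x - np.cos(dt * nsteps))) # Verlet error, much smaller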

* Root searching: bisection and secant method xxx
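
A minimal Python sketch of both methods (the test function x**2 − 2 is arbitrary):

def bisection(f, a, b, tol=1e-10):
    # Requires f(a) and f(b) to have opposite signs.
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m # the root is in the left half
        else:
            a, fa = m, f(m) # the root is in the right half
    return 0.5 * (a + b)

def secant(f, x0, x1, tol=1e-10, maxiter=100):
    # Like Newton's method, with the derivative replaced by a finite difference.
    for _ in range(maxiter):
        f0, f1 = f(x0), f(x1)
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

print(bisection(lambda x: x**2 - 2, 0.0, 2.0)) # ~1.4142135623
print(secant(lambda x: x**2 - 2, 0.0, 2.0))    # same root, fewer iterations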

* Computational “Costs”:

Computational cost can be divided in:

  • CPU time
  • Main memory
  • Disc space

Only CPU time is really critical for scientific computations.

What is an operation or FLOP count?

FLOPS = FLoating point Operations Per Second. It is a measure of the processing speed of a computer.

How can the operation count for simple linear algebra operations be estimated?

Another important quantity to assess the performance required from a computer is the operation count Nop of a program.

For linear algebra tasks, it can be estimated:

  • “Cheap” operations are neglected.
  • Only floating point multiplications are counted. E.g. for the multiplication of an n1 x n2 matrix and an n2 x n3 matrix, Nop = n1·n2·n3.
  • For other operations, Nop must be evaluated.

What are “wall time” and “cpu time”?

Wall time, or real time, is the time elapsed during the computation, from the start of the program to the end, as it would be measured by a watch.

CPU time: time that the machine has spent doing computations and processing instructions.

What determines the wall time of a program?

The wall time is determined by the CPU time, but also by the “waits” incurred due to I/O communication, time spent waiting for availability of processing power, and “sleep times”, where the program is temporarily interrupted.


Energy Sources and Luminosity

Where does the energy emitted by a star come from?

One source is obviously gravitational potential energy U.

grav-pot-e

If r decreases, U decreases too, and the difference can be converted into radiated energy. However, because of the Virial Theorem (E = <U>/2), only 1/2 of the change in U can be radiated away (the other half remains as thermal energy to heat the star).

We can calculate U as:

grav-pot-U

This is done by approximating the density ρ as equal to the average density of the star. R is the radius of the star and M is the mass of the star.

Likewise, the total energy E:

grav-pot-totE

Kelvin-Helmholtz Mechanism:

It is assumed that the only source of the star’s radiation is the loss of gravitational potential energy. This allows us to calculate a Kelvin-Helmholtz timescale, i.e. an estimate of the age of the Sun, based on the assumptions of constant luminosity and an initial radius much larger than the current one. The K-H timescale gives an age of ~10^7 years, which clashes with estimates of the age of rocks on the Moon of 4×10^9 years. Thus another source of energy must exist: it is, as you perhaps already know, nuclear fusion reactions.

The nuclear timescale gives times of ~10^10 years.

Nuclear fusion releases energy as a product of mass loss. For example, a reaction commonly occurring in the center of a star is the fusion of 4 hydrogen nuclei into one helium nucleus; in this process there is necessarily a mass loss of 0.7%, which is equivalent (using Einstein’s equation, E = mc²) to 26.7 MeV (which is quite a lot). This energy is released due to changes in the configuration of the system, and is called binding energy, Eb.

Factors influencing nuclear reaction rates:

  • Energy distribution: We can use Maxwell-Boltzmann to determine the proportion of particles within a given range of energies.
  • Interaction probability: introduction of a cross-section = number of reactions per target nucleus per unit time, divided by the flux of incident particles. It is nothing else than the reaction probability.
  • Coulomb Potential Energy Barrier: needs to be overcome for nuclear reactions to take place. It is due to the electrostatic repulsion between nuclei, which at short range is overwhelmed by the attractive strong nuclear force.

    Example of Coulomb potential energy barrier. Source.

  • Quantum tunnelling: due to the Uncertainty Principle, knowing the momentum of a particle with a certain precision, there is a certain imprecision about its position, and vice versa. This can be translated to the potential energy curve above as follows: if the particle has an energy near the peak, the barrier there is thin enough that the particle is not defined to be on one side or the other of it. It jumps between one side and the other, so to say; more than that, it is on both sides simultaneously. This makes it possible for the particle to jump across the barrier, even if, classically speaking, it does not have enough energy to do so. Including tunnelling, the temperature estimated to overcome the barrier is:
    T-quantum
    The “Z” are nuclear charges. e is the charge of an electron. µm is the reduced mass of the two nuclei. k is Boltzmann’s constant, and h is Planck’s constant.
  • Electron screening: The free electrons resulting from ionization lower the Coulomb barrier because they screen the positive charge of the nuclei.
    ueff
    The potential added from electron screening is always negative.

We obtain two terms in the energy that affect the reaction rate for nuclear fusion: a Boltzmann term and a Coulomb term:

energy-terms

The product of these gives a peak (the Gamow peak):

Representation of the Gamow peak, from the contributions of the Maxwell-Boltzmann distribution and the Coulomb barrier (tunnelling). Source.

We can write the nuclear energy generation rate as a power law centered on a particular temperature range.

energy-nuclear

Once we have an idea of the energy generated by nuclear fusion, we can write down the total energy:

tot-e-grav-nuc

Then, the luminosity gradient is:

dL_r/dr = 4π r² ρ ε

Lr is the interior luminosity, i.e. the luminosity within a sphere of radius r. Here we use the same idea of a spherical shell as in the previous post.


Hydrostatic Equilibrium

The internal structure of stars cannot be directly observed (an exception is neutrino emission from the stellar core, since neutrinos have a hard time interacting with… anything else, they are not blocked by the upper layers of the stellar body), thus it must be described by computational models in agreement with physical laws and with observable features of the stars.

Stellar evolution is controlled by the equilibrium between the gravitational force, compressing the star, and the hydrostatic pressure, supporting it.

For spherically symmetric objects we have the following condition for hydrostatic equilibrium:

ρ · d²r/dt² = −(G M_r ρ / r²) − dP/dr

If the acceleration can be neglected (which is the case if changes in the star happen over very long times; it wouldn’t be the case for pulsating stars with periodic changes of only a few hours), then:

dP/dr = −G M_r ρ / r²

Note that a pressure gradient must exist to support the star (i.e. not just a pressure). The larger r is (r = distance from the center), the smaller P is.

Mass conservation equation:

dM_r/dr = 4π r² ρ

Look at these differentials of mass (dM), pressure (dP), and distance (dr). To get a better idea of what we’re talking about, this is the model that we’re using:

star-model

Such pressure can be expressed as:

P_g = ρ k T / (µ mH)

This is simply the ideal gas equation, with a Maxwell-Boltzmann distribution of velocities for the particles. Since it is the pressure of an ideal gas, we may call it Pg.

Now, there are two kinds of particles in the universe: fermions (what we usually call matter, e.g. protons, electrons, neutrons, atoms, molecules, etc.) and bosons (force-carriers, e.g. photons).

If we had to take into account the Heisenberg Uncertainty Principle and the Pauli Exclusion Principle (roughly speaking, fermions don’t want to be similar to each other, or a bit more technically, two fermions cannot simultaneously be in an identical state), we shouldn’t use the Maxwell-Boltzmann (MB) distribution, but the Fermi-Dirac distribution. For photons, likewise, we cannot use the MB distribution; we need the Bose-Einstein distribution instead.

However, this only makes a difference in cases of extremely dense matter, e.g. white dwarfs, or neutron stars. For most of the cases (sufficiently low densities and velocities), we do just fine with the MB distribution.

Still, the radiation pressure Prad, the pressure exerted by photons, must be taken into account. Assuming blackbody radiation, this can be calculated as a function of T as:

P_rad = 4σT⁴ / (3c)

Remember that σ is the Stefan-Boltzmann constant.

Thus, the total pressure Pt, is:

P_t = ρ k T / (µ mH) + 4σT⁴ / (3c)

Here µ is something called the mean molecular weight. It is simply a mass weighting factor expressing the average mass of a gas particle, m̄, in units of the mass of a hydrogen atom (mH):

µ = m̄ / mH

The radiation pressure can be larger than the pressure of the ideal gas, and even larger than the pressure caused by gravity (in which case, the star would be expanding).


Stellar Atmospheres

Radiation emitted by a star can be measured in terms of the specific intensity.

I_λ ≡ dE_λ / (dλ dt dA cosθ dΩ)

It corresponds to the amount of energy of a given wavelength per unit time dt crossing a unit area dA into a solid angle dΩ (think of the solid angle as a 2D angle in 3D space).

For a blackbody:

B_λ(T) = (2hc² / λ⁵) · 1 / (e^(hc/λkT) − 1)

The mean intensity <Iλ> is the result of averaging the specific intensity over the solid angle (i.e. integrating it).

For an isotropic radiation field (i.e. the same in all directions):

<I_λ> = I_λ

Energy density: energy per unit volume with a wavelength between λ and λ+dλ.

u_λ dλ = (4π/c) <I_λ> dλ

The total energy density u is obtained by integrating over all wavelengths.

Radiative flux: net energy with a wavelength between λ and λ+dλ passing each second through a unit area in the direction perpendicular to the surface (we are talking about the surface of the star, or about an imaginary, “mathematical” surface).

radiativeflux

For an isotropic radiation field there is no energy transfer (Fλ = 0).

Radiation pressure: pressure caused by a beam of light, and in particular by the momentum of the photons, p = E/c.

radiationpressure

The total radiation pressure Prad is obtained by integrating the previous expression over the wavelength λ.

For a blackbody, the radiation pressure is simply u/3.

Stars are not blackbodies. The spectrum of a star shows absorption lines. In particular, the absorption lines produced by the metallic elements of a star are called line blanketing. This is a phenomenon taking place in the atmosphere of a star.

Stars are not in thermodynamic equilibrium either, since there is a net outward flow of energy. The temperature of a star varies with position, thus there are flows of gas travelling across the star between zones of different temperature. We can assume, however, zones of local thermodynamic equilibrium (LTE), if the mean free path of the particles (the distance travelled by particles and photons between collisions) is small in comparison with the distances over which the temperature changes significantly.

In such conditions, we discuss the absorption of photons of a beam of light as it travels through the stellar atmosphere.

As the beam travels and photons are absorbed by stellar gas, the change in intensity of the beam is:

dI_λ = −κ_λ ρ I_λ ds

thus the intensity decreases with the distance (obviously).

ds is the distance traveled. ρ is the density of the medium. κλ is the opacity, and depends on the composition of the gas phase, the density, and the temperature.

Optical depth, τλ: Is a measure of the transparency of a medium over a certain distance ds.

dτ_λ = −κ_λ ρ ds (with s measured along the ray, toward the observer)

The optical depth is not a distance: it is dimensionless, and it increases monotonically with the path length s. For a beam of light travelling from position 0 (with optical depth τλ,0 = τλ) to position 1 (τλ,1 = 0):

d-optical-depth

The relation between optical depth and intensity is then:

I_λ = I_λ,0 e^(−τ_λ)

The optical depth can be thought of as the number of mean free paths from the original position to the surface, as measured along the ray’s path.

optical-depth-range
